Tangentially related: if you use iMessage, I'd recommend you switch to Signal.

The text below is from a Hacker News comment:
Gonna repeat myself, since iMessage hasn't improved one bit in four years. I've also added some edits, since attacks and Signal have both improved.

iMessage has several problems:

iMessage uses RSA instead of Diffie-Hellman. This means there is no forward secrecy. If the endpoint is compromised at any point, it allows an adversary who has

a) been collecting messages in transit from the backbone, or

b) in cases where clients talk to the server over a forward-secret connection, been collecting messages from the IM server

to retroactively decrypt all messages encrypted with the corresponding RSA private key. With iMessage the RSA key lasts practically forever, so one key can decrypt years' worth of communication.
I've often heard people say "you're wrong, iMessage uses a unique per-message key and AES, which is unbreakable!" Both of these are true, but the unique AES key is delivered right next to the message, encrypted with the public RSA key. It's like transporting a safe while the key to that safe sits in a glass box strapped to its side.
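To make that concrete, here's a minimal Python sketch of this kind of hybrid scheme, using the pyca/cryptography library. The 2048-bit key size and AES-GCM mode are simplifications for illustration, not Apple's exact construction; the point is only that every per-message key is wrapped under one long-lived RSA key.

```python
# Illustrative hybrid encryption: a fresh AES key per message, wrapped with
# the recipient's long-term RSA public key. Sizes/modes are simplified and
# NOT Apple's exact construction.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

recipient_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_pub = recipient_priv.public_key()

def send(plaintext: bytes):
    msg_key = AESGCM.generate_key(bit_length=128)      # unique per-message AES key
    nonce = os.urandom(12)
    ciphertext = AESGCM(msg_key).encrypt(nonce, plaintext, None)
    wrapped_key = recipient_pub.encrypt(msg_key, oaep)  # ...but it travels wrapped
    return nonce, ciphertext, wrapped_key               # under the long-term RSA key

# Anyone who recorded (nonce, ciphertext, wrapped_key) in transit and later
# obtains recipient_priv can unwrap msg_key and read the message -- there is
# no forward secrecy, no matter how strong the AES layer is.
nonce, ciphertext, wrapped_key = send(b"see you at six")
msg_key = recipient_priv.decrypt(wrapped_key, oaep)
print(AESGCM(msg_key).decrypt(nonce, ciphertext, None))
```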
The RSA key strength is only 1280 bits. This is dangerously close to what has been publicly broken: on February 28, 2020, Boudot et al. factored an 829-bit key.

To compare these key sizes, we use https://www.keylength.com/en/2/

A 1280-bit RSA key has about 79 bits of symmetric security; an 829-bit RSA key has about 68 bits. So compared to what has publicly been broken, the iMessage RSA key is only 11 bits, i.e. 2048 times, stronger.
The same site estimates that in an optimistic scenario, intelligence agencies could only factor about 1507-bit RSA keys in 2024. The conservative (security-conscious) estimate assumes they can break 1708-bit RSA keys at the moment.

(Sidenote: even the optimistic scenario is very close to the 1536-bit DH keys the OTR plugin uses, so you might want to switch to OMEMO or the Signal protocol ASAP.)

On keylength.com, no recommendation suggests using anything less than 2048 bits for RSA or classical Diffie-Hellman. iMessage is badly, badly outdated in this respect.
iMessage uses digital signatures instead of MACs. This means that each sender of a message generates irrefutable proof that they, and only they, could have authored the message. The standard practice since 2004, when OTR was released, has been to use Message Authentication Codes (MACs), which provide deniability by using a symmetric secret shared over Diffie-Hellman.

This means that Alice, who talks to Bob, can be sure received messages came from Bob, because she knows she didn't write them herself. But it also means she can't show a message from Bob to a third party and prove Bob wrote it, because she also holds the symmetric key which, in addition to verifying the message, could have been used to authenticate it in the first place. So Bob can deny he wrote the message.

Now, this most likely does not mean anything in court, but that is no reason not to always use best practices.
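To illustrate why a shared-key MAC is deniable, here's a minimal sketch using only Python's standard library; shared_key is a placeholder standing in for a secret both parties derived over Diffie-Hellman.

```python
import hashlib
import hmac

# Alice and Bob both hold the same symmetric key, e.g. derived over Diffie-Hellman.
shared_key = b"\x00" * 32  # placeholder for the DH-derived secret

def tag(message: bytes) -> bytes:
    # Either party can compute this tag, so a valid tag only proves the message
    # came from *someone who knows shared_key* -- and that includes the verifier.
    return hmac.new(shared_key, message, hashlib.sha256).digest()

def verify(message: bytes, t: bytes) -> bool:
    return hmac.compare_digest(tag(message), t)

msg = b"meet me at the bench"
t = tag(msg)            # Bob authenticates his message
assert verify(msg, t)   # Alice knows it's from Bob, because she didn't compute it
# ...but she can't prove that to anyone else, because she could have computed
# exactly the same tag herself. That is the deniability property.
```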
The digital signature algorithm is ECDSA over the NIST P-256 curve, which https://safecurves.cr.yp.to/ does not rate as safe. Most notably, it is not fully rigid but manipulable: "the coefficients of the curve have been generated by hashing the unexplained seed c49d3608 86e70493 6a6678e1 139d26b7 819f7e90".
iMessage is proprietary: you can't be sure it doesn't contain a backdoor that allows retrieval of messages or private keys with some secret control packet from an Apple server.

iMessage allows an undetectable man-in-the-middle attack. Even if we assume there is no backdoor that allows private key or plaintext retrieval from the endpoint, it's impossible to ensure the communication is secure. Yes, the private key never leaves the device, but if you encrypt the message with the wrong public key (which you by definition need to receive over the Internet), you might be encrypting messages to the wrong party.

You can NOT verify this by e.g. sitting on a park bench with your buddy and seeing that they receive the message seemingly immediately. It's not as if the attack requires some NSA agent to hear their eavesdropping phone 1 beep, read the message, and type it into eavesdropping phone 2, which then forwards the message to the recipient. The attack can be trivially automated, and it is instantaneous.

So with iMessage the problem is that Apple chooses the public key for you. It sends it to your device and says: "Hey Alice, this is Bob's public key. If you send a message encrypted with this public key, only Bob can read it. Pinky promise!"
Proper messaging applications use what are called public key fingerprints, which allow you to verify out-of-band that the messages your phone outputs are end-to-end encrypted with the correct public key, i.e. the one that matches the private key of your buddy's device.
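The idea behind a fingerprint is just a hash of the public key rendered in a form two humans can compare out-of-band. Here's a minimal sketch; the truncation and grouping are arbitrary choices for illustration, not any particular app's format.

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    # Hash the public key and render part of the digest in human-comparable blocks.
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

# Alice and Bob compare these strings in person or over a phone call.
# If the fingerprint Alice's app shows for "Bob's key" matches the one Bob's
# app shows for his own key, no substituted key sits in between.
print(fingerprint(b"\x04" + b"\x11" * 64))  # stand-in for real public key bytes
```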
EDIT: This actually got some improvements a month ago! Please see the discussion in the replies.
When your buddy buys a new iDevice, like a laptop, they can use iMessage on that device. You won't get a notification about this. What happens in the background is that your buddy's new device generates an RSA key pair and sends the public part to Apple's key management server. Apple then forwards the public key to your device, and when you send a message to that buddy, your device first encrypts the message with an AES key and then encrypts that AES key with the public RSA key of each of your buddy's devices. The encrypted message and the encrypted AES keys are then passed to Apple's message server, where they sit until your buddy fetches new messages on some device.
Like I said, you will never get a notification like “Hey Alice, looks like Bob has a brand new cool laptop, I’m adding the iMessage public keys for it so they can read iMessages you send them from that device too”.
This means that a government that issues a FISA-court national security request (a stronger form of NSL), or any attacker who hacks the iMessage key management server, or any attacker who breaks the TLS connection between you and the key management server, can send your device a packet that contains the attacker's RSA public key and claims it belongs to some iDevice Bob has.

You could possibly detect this by asking Bob how many iDevices they have, then stripping TLS from the iMessage traffic and counting how many encrypted AES keys are being output. But it's also quite possible that Apple can remove or replace keys on your device (e.g. to keep iMessage snappy). And even if they can't, an attacker can simply wait until your buddy buys a new iDevice and only then perform the man-in-the-middle attack against that key.
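Here's a minimal sketch of that fan-out, again with pyca/cryptography and simplified sizes/modes rather than Apple's actual protocol. The point is that a key silently added by the directory gets its own wrapped copy of the message key, indistinguishable to the sender from a legitimate device's copy.

```python
# Illustrative fan-out of one per-message key to every "device key" the key
# directory returns for a contact. Simplified; not Apple's actual protocol.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Pretend these came back from the key server: Bob's phone and Bob's laptop...
bob_devices = [rsa.generate_private_key(public_exponent=65537, key_size=2048)
               for _ in range(2)]
directory_keys = [d.public_key() for d in bob_devices]

# ...plus one key silently inserted by whoever controls the key directory.
attacker = rsa.generate_private_key(public_exponent=65537, key_size=2048)
directory_keys.append(attacker.public_key())

msg_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(msg_key).encrypt(nonce, b"see you at six", None)

# The sender wraps the message key for *every* listed key; the attacker's copy
# looks exactly like a legitimate device's copy, and no UI flags it.
wrapped_keys = [pub.encrypt(msg_key, oaep) for pub in directory_keys]
print(len(wrapped_keys), "recipient device keys were used")
```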
To sum it up, as Matthew Green said[1]: "Fundamentally the mantra of iMessage is 'keep it simple, stupid'. It's not really designed to be an encryption system as much as it is a text message system that happens to include encryption."

Apple has great security design in many parts of its ecosystem. However, iMessage is EXTREMELY bad design, and it should not be used under any circumstances that require verifiable privacy.
In comparison, Signal:

Uses Diffie-Hellman + Kyber, not RSA (see the X25519 sketch after this list)

Uses Curve25519, a safe curve with ~128 bits of symmetric security, not ~79 bits like iMessage

Uses the Kyber key encapsulation mechanism for post-quantum security

Uses MACs instead of digital signatures

Is not just free and open-source software, but has reproducible builds, so you can be sure your binary matches the source code

Features public key fingerprints (called safety numbers) that allow verification that there is no MITM attack taking place

Does not allow key insertion attacks under any circumstances: you always get a notification that the encryption key changed. If you've verified the safety numbers and marked them "verified", you won't even be able to accidentally use the inserted key without manually approving the new keys.
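For contrast with the static-RSA scheme above, here's a minimal sketch of an ephemeral X25519 key agreement plus a KDF, using pyca/cryptography. This is just the Diffie-Hellman building block, not Signal's full X3DH/PQXDH handshake or Double Ratchet.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates a fresh (ephemeral) key pair for this exchange.
alice_priv = x25519.X25519PrivateKey.generate()
bob_priv = x25519.X25519PrivateKey.generate()

# Only public keys cross the wire; both sides derive the same secret locally.
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())
assert alice_shared == bob_shared

# A message key is derived from the shared secret, after which the ephemeral
# private keys and the secret can be discarded. Recording today's traffic does
# not help an attacker who compromises a device later: that is forward secrecy.
message_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"demo message key").derive(alice_shared)
print(message_key.hex())
```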
So do yourself a favor and switch to Signal ASAP.
[1] https://blog.cryptographyengineering.com/2015/09/09/lets-tal…
Wow.
I think it would help to summarize the major issue with iMessage and have it at the top.
Encrypting the per-message AES key with RSA and shipping it alongside the message content is so face-palmingly bad that you really don't need to read any further; the rest is just more evidence of issues.
Well done. I had no idea. Saving your summary, because it’s so staggering. Wish I could upvote you a hundred times. This is a huge issue.
We literally know that the FBI at one point was unable to break into an iPhone, and then a few days later was able to break into it. Apple clearly let them in the back door after negotiating the condition that they could deny and act all upset about it.
And then they launched a whole privacy-focused marketing campaign immediately afterwards. It's all laughably transparent, yet you still have moronic pop-security YouTubers repeating the bullshit that Apple is a secure platform.
Um, no, the FBI used software developed by an Israel-based company to hack into it. This is well documented. Israeli firms have been creating and selling iPhone-hacking software to nation states for years. They also sold it to the Saudis, who used it to track and kill the U.S. resident Jamal Khashoggi.
You're right, I don't think those Israeli companies got a backdoor from Apple. A "magic packet" backdoor is too hard to hide in the code and would tank their trust FAST. However, they do encrypt the system files to prevent reverse engineering. iPhones also have enough bad practices (see the iMessage post above, some of them oddly specific) to make a software developer cry in the corner. Incompetence, UX tunnel vision, or intentional flaws? Honestly, I don't know the answer.
I know, right?
Unfortunately, the ignorance of the masses (myself included, and I try to stay current) lets them get away with this stuff.

Too many people say "well, I don't do anything wrong, so why be concerned," as if people have never been railroaded before (Ruby Ridge, anyone?).
Seeing the kind of data I know is known about me is terrifying, and I’ve been working for years to reduce it. My current effort is a final degoogle.
Messaging is a tough one to crack; people still use SMS, as much as I hate it.
I wouldn't really classify Ruby Ridge as a railroading.

This is a guy who uprooted his family to move across the country so he could hang out with terrorists who shared Hitler-loving beliefs.

He then sold a sawed-off shotgun to a man he believed was one of those terrorists.

We can definitely criticize law enforcement for every single thing they did from the inception of the case, but Weaver was not innocent.
"In iOS 13 or later and iPadOS 13.1 or later, devices may use an Elliptic Curve Integrated Encryption Scheme (ECIES) encryption instead of RSA encryption" (from Apple docs).
If you're curious about it all, I'd suggest studying some notes from the protocol researchers instead of reaching for the pitchforks immediately. Here's one good post on the topic.
No way governments spying on their own people I could never believe such an act would be tolerated.
Thanks for bringing that info here. I was already using Signal but I was concerned about their approach to notification security when I read this news this week.
Here's some info I found on the Reddit Signal sub; not verified, just comments:

*All that goes through the Google or Apple push notification systems is "you've got a push notification." It's up to your Signal app to then wake up, contact Signal's servers, and see what the notification was. Message content and sender identity never pass through Google/Apple push infrastructure.

*Signal does not use the Google notification system, is my understanding. For apps that do, Google only gets metadata, not the content of the message.
The 2nd comment is not quite right: Signal does use the Google notification system if you install it from the Play Store. You can avoid that by installing the APK downloaded from the Signal site.
Metadata that is unencrypted could include things that identify who the message is to or from, and the timestamps of the messages. It seems like we can only be sure the content of messages is secure, but not the metadata.
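For what it's worth, the wake-and-fetch pattern described in the first quoted comment looks roughly like the sketch below. Every name in it is made up for illustration; this is not Signal's actual code or API.

```python
# Hypothetical sketch of the "content-free push, then fetch" pattern.

def fetch_pending_from_messaging_server() -> list[bytes]:
    # Placeholder: in a real client this is the app's own authenticated,
    # encrypted connection to the messaging service, returning ciphertexts.
    return [b"<ciphertext envelope>"]

def decrypt_locally(ciphertext: bytes) -> str:
    # Placeholder for end-to-end decryption; the keys never leave the device.
    return "<decrypted message>"

def on_push(push_payload: dict) -> None:
    # The push payload carries no message content and no sender identity --
    # at most "something is waiting for you". Google/Apple only ever see this.
    for ciphertext in fetch_pending_from_messaging_server():
        print(decrypt_locally(ciphertext))

on_push({"wakeup": True})
```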