With contributions from Elliot Wong and Calvin Lee
Zoom is an easy-to-use, feature-rich video conferencing software whose usage skyrocketed amid the COVID-19 pandemic. In fact, we use it at Oursky too. However, Zoom’s popularity came with an outcry over privacy and security. It has been involved in a string of security incidents, including the infamous Zoombombing, and was found to be riddled with vulnerabilities and issues with its supposed use of end-to-end encryption, as uncovered by the Citizen Lab at the University of Toronto.
On the bright side, this is a good lesson – and clear reminder – for developers to strengthen the security of their software. Here are five dos and don’ts that we think every web and mobile application should learn to avoid Zoom’s mistakes:
Beware of enumeration / brute force of IDs in your URL.
What happened: Zoombombing, in which uninvited people or internet trolls disrupt Zoom meetings.
What’s wrong with it: It’s a common mistake for developers or end users to assume that URLs are secret. But since the meeting ID embedded in a Zoom URL is simply a short string of digits, the range of possible values is limited. Attackers can brute-force the ID space, enumerating URLs until they hit valid meetings and gain access to them (i.e., Zoombombing); if IDs are generated sequentially, enumeration also reveals the data size and growth rate of the service. In short, since a Zoom meeting ID comprises just 10 random digits, it’s trivial to scan for Zoom meeting rooms that aren’t guarded by a password.
Lesson learned: Secure your software against enumeration and brute force. Here are a few things that developers should consider:
- Avoid using simple IDs such as primary keys or short integers, especially when one is part of a URL. Use universally unique identifiers (UUIDs) or universally unique lexicographically sortable identifiers (ULIDs), which hide information like data size and growth rate and are therefore not prone to leaking it.
- If the meeting ID carried by a URL must be human-readable and memorable (as in Zoom’s case), it’s very likely to be too simple. In that case, design the room-creation UX so that users are encouraged to guard the room with a password and are prompted to understand the risks if they don’t.
- Use a Web Application Firewall (WAF) or gateway to rate-limit access to the URLs. This doesn’t eliminate the risk but helps significantly lower the chances of getting the IDs enumerated, as the attacker will only get a few tries to guess room IDs.
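The first point above can be sketched in a few lines of Python. The variable names here are illustrative, not from Zoom’s implementation; the point is the difference in search space between a short numeric ID and a random identifier.

```python
import secrets
import uuid

# Risky: a 10-digit numeric ID lives in a space of only ~10^10 values,
# which is small enough to scan, as with unprotected Zoom meeting IDs.
numeric_id = secrets.randbelow(10**10)

# Safer: a UUIDv4 carries 122 bits of randomness, far beyond brute-force
# range, and leaks nothing about how many rooms exist or how fast they grow.
room_id = uuid.uuid4()

# If the ID must appear in a URL, a URL-safe random token also works:
url_token = secrets.token_urlsafe(16)  # 128 bits of entropy

print(room_id, url_token)
```

Note the use of the `secrets` module rather than `random`: the latter is not cryptographically secure and should never generate identifiers that act as access controls.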
Understand the difference between HTTPS and E2EE – and don’t invent your own crypto!
What happened: Users were misled into believing that end-to-end encryption is applied to all Zoom meetings, when in fact Zoom only encrypts traffic in transit, using a flawed encryption mode.
What’s wrong with it: Zoom doesn’t use end-to-end encryption; it merely encrypts the connection between devices and Zoom’s servers. The transport protocol is a bespoke extension of the existing Real-time Transport Protocol (RTP) that uses the Advanced Encryption Standard (AES) in electronic codebook (ECB) mode. This is inadequate, because ECB preserves patterns in the input even after encryption, unlike industry-standard streaming encryption protocols such as the Secure Real-time Transport Protocol (SRTP).
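To see why ECB mode is a problem, consider this toy sketch. It is emphatically not AES; it substitutes a trivial block cipher (XOR with a fixed key), but applies it exactly the way ECB applies any block cipher: to each block independently, with no chaining. That independence is what leaks patterns.

```python
# Toy illustration of the ECB pattern leak. NOT real encryption --
# the "cipher" here is just XOR with a fixed key, used to show that
# ECB maps identical plaintext blocks to identical ciphertext blocks.

BLOCK = 8
KEY = bytes(range(BLOCK))  # fixed toy key, purely for demonstration

def toy_ecb_encrypt(plaintext: bytes) -> bytes:
    blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]
    # ECB: each block is encrypted independently, with no IV or chaining.
    return b"".join(bytes(p ^ k for p, k in zip(b, KEY)) for b in blocks)

msg = b"ATTACK!!ATTACK!!RETREAT!"  # two identical 8-byte blocks, then a third
ct = toy_ecb_encrypt(msg)

# The repetition in the plaintext survives "encryption" intact:
assert ct[0:8] == ct[8:16]
assert ct[16:24] != ct[0:8]
```

A real block cipher in ECB mode has exactly the same property: repeated frames in a video stream produce repeated ciphertext blocks, so the structure of the stream remains visible. Chained or authenticated modes (CBC, GCM) and protocols like SRTP avoid this by making each block’s ciphertext depend on more than the block itself.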
Lesson learned: Encrypt the data properly.
It’s innate for businesses and users to ask how secure a dataflow is. Software makers can simply claim “We have encrypted it,” but “encryption” can have different meanings. A basic implementation can be “encryption at transport” by applying the Transport Layer Security (TLS) protocol, where data is encrypted while being transferred between a client and a server or services.
The Hypertext Transfer Protocol Secure (HTTPS) protocol is an example of this, where Transport Layer Security (TLS) is applied to encrypt HTTP requests and responses. This prevents parties other than the sender (i.e., a web client) and the server from accessing the content of data packages sent in-between.
As its name suggests, end-to-end encryption (E2EE) means transferred data is encrypted between the ends. Let’s say Alice is communicating with Bob on a messaging platform with E2EE implemented. Once a message is encrypted and sent from Alice’s device, it stays encrypted until it reaches Bob’s, where decryption will then take place. All third parties including the chat provider’s server know nothing about the message content. This is different from HTTPS, where data encryption terminates as soon as a package hits the server.
Because the server can’t access the transferred data and implementation is complex, E2EE is still not widely adopted in application development; restricting what the server can do may affect a product’s business model as well. As a result, many communication platforms still don’t implement E2EE. Not being able to provide such a feature is one thing, but misleading users into believing it is implemented is essentially false advertising, and users may be unknowingly exposed to security and privacy risks.
Junior developers may also think it’s a good idea (and cool) to implement their own cryptosystem. In fact, cryptography is extremely complicated, and any encryption protocol or cryptographic architecture requires multiple rounds of review and cryptanalysis. Stick with tried-and-tested protocols and software libraries instead of rolling your own cryptosystem.
Don’t encode predictable secrets in users’ input.
What happened: A bug in Zoom allowed an attacker to activate any account and connect it with Facebook by reusing the value of the code parameter in the sign-in activation link.
What’s wrong with it: Developers may expose predictable secrets and rely on them for verification.
Lesson learned: While the bug seems trivial, there are many ways a developer could make a similar mistake. Here’s an example: instead of using the same code parameter for sign-in and verification, the developer uses a hash such as md5(email_addr + salt) as the value in the verification link, without any additional checking. One could easily brute-force the md5 hash and hack the verification endpoint. The rule of thumb is to never trust user input. Understanding cryptographic hash functions also helps developers avoid using the wrong algorithm.
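Here is a minimal sketch of the safer approach: instead of deriving a token from public data, generate an unguessable random token server-side and store it against the account. The function names and the dict standing in for a database are hypothetical, for illustration only.

```python
import hashlib
import secrets

# Weak: a token derived from public data plus a salt. If the salt leaks
# or is short enough to brute-force, anyone can forge verification links.
def weak_token(email: str, salt: str) -> str:
    return hashlib.md5((email + salt).encode()).hexdigest()

# Better: a random, single-use token with no relationship to the email.
pending_verifications = {}  # token -> email; stand-in for a database table

def issue_verification_token(email: str) -> str:
    token = secrets.token_urlsafe(32)  # 256 bits of randomness
    pending_verifications[token] = email
    return token

def verify(token: str):
    # Single-use: pop removes the token so a link can't be replayed.
    return pending_verifications.pop(token, None)

t = issue_verification_token("alice@example.com")
assert verify(t) == "alice@example.com"
assert verify(t) is None  # second attempt with the same link fails
```

The key property is that the token is unguessable and meaningless on its own: validity comes from the server-side lookup, not from anything an attacker can recompute from the email address.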
Don’t assume that email addresses from the same domain belong to the same group.
What happened: Zoom’s “Company Directory” setting automatically adds other people to a user’s list of contacts if they signed up with an email address sharing the same domain. This exposes people who signed up with a personal or free ISP email address to strangers.
What’s wrong with it: I can’t recall which service started this (Yammer, probably): users were automatically added to the same company directory if their email addresses shared the same domain. It’s an interesting idea for reducing friction when users from the same organization sign up for your software as a service (SaaS). But the service provider/vendor usually makes this feature optional: it is only enabled if you enroll your own domain name with the service and explicitly activate it. Some software, such as 1Password, even requires an additional approval after someone signs up with the same domain name.
Lesson learned: If you allow user discovery within an organization based on the email domain, make sure the domain is whitelisted, and let your users enable the feature explicitly.
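A sketch of that gating logic, assuming a hypothetical allowlist of enrolled domains and a denylist of free email providers (both sets here are made-up examples, not a complete list):

```python
# Hypothetical sketch: domain-based contact discovery is allowed only for
# domains whose owner explicitly enrolled, and never for free providers.

FREE_PROVIDERS = {"gmail.com", "outlook.com", "yahoo.com"}

enrolled_domains = {"oursky.com"}  # domains that opted in to the directory

def same_company_discovery_enabled(email: str) -> bool:
    domain = email.rsplit("@", 1)[-1].lower()
    # Sharing a domain is necessary but not sufficient: the domain owner
    # must also have enrolled and activated the feature.
    return domain not in FREE_PROVIDERS and domain in enrolled_domains

assert same_company_discovery_enabled("dev@oursky.com")
assert not same_company_discovery_enabled("someone@gmail.com")
assert not same_company_discovery_enabled("lead@acme.io")  # not enrolled
```

In a real system the denylist of free providers would be far longer, and enrollment should require proving domain ownership (e.g., via a DNS record), not merely signing up with a matching address.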
Avoid patterned or static filenames, and secure your S3 bucket or cloud file storage.
What happened: Because Zoom uses the same filename for video recordings (zoom_0, then zoom_N for subsequent recordings in the same meeting), thousands of recordings became exposed and searchable on the internet, including on YouTube, Vimeo, and unsecured S3 buckets.
What’s wrong with it: Technically speaking, it’s not entirely Zoom’s fault: users who share their recordings online inevitably expose themselves to data leaks. Still, for a well-known application, using a static filename or an obvious naming pattern is not recommended, as it makes the exports easy to find through search. Unsecured and misconfigured S3 buckets have also been at the heart of many data breaches.
Lesson learned: Avoid static filenames or filenames with clear patterns if your product lets users export sensitive data. Additionally, learn how to properly configure the security of your S3 bucket or cloud file storage service.
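One simple way to do this is to append a random component to every export filename, so a search engine or bucket scanner can’t find recordings by pattern. The function name and format below are illustrative, not how Zoom names its files.

```python
import secrets
from datetime import datetime, timezone

# Instead of a predictable name like "zoom_0.mp4", embed a random token
# so exports can't be discovered by pattern-matching searches.
def export_filename(prefix: str = "recording", ext: str = "mp4") -> str:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d")
    token = secrets.token_hex(8)  # 64 random bits -> 16 hex chars
    return f"{prefix}_{stamp}_{token}.{ext}"

name = export_filename()
# e.g. a name of the form recording_<date>_<16 hex chars>.mp4
```

The date prefix keeps files sortable for the owner, while the random tail makes each name unguessable; this complements, but does not replace, proper access controls on the bucket itself.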
Subscribe to our blog for more expert advice for developers – by developers: