Computer software is becoming increasingly complex, and that complexity leaves systems and organizations vulnerable to hacker attacks. That is why companies turn to outside experts to track down errors in their software. One such method is running a bug bounty program.
In a bug bounty program, a company offers rewards to ethical hackers who discover bugs or security weaknesses. These programs are often run by big software publishers so that issues can be fixed before criminals discover and exploit them.
Google’s Android, Chrome, and Play platforms continue to be vulnerability-rich environments. In 2021 Google paid a record $8.7 million in rewards to 696 third-party bug hunters from 62 countries who discovered and reported thousands of vulnerabilities in the company’s technologies. That is a nearly 30% increase over the $6.7 million paid in 2020.
Companies often hire a team to test the security of their website or system before deployment. But what happens when new features or updates are pushed? What about the bugs or weaknesses that these teams miss? That is why it makes sense to sign up for a bug bounty program: the system gets tested by a vast range of freelance security experts, not just one team. Bug bounty programs also ensure that the system is tested continuously, not just at one point in time. For a mid-size company, they can also save money, since an in-house team of cybersecurity experts may simply be too expensive. In a bug bounty program, experts are rewarded only when they discover a new bug; the time they spend searching costs the company nothing.
Bug bounty programs come in two popular variants: ethical hackers either work directly with the company or go through an intermediary platform. The intermediary can verify the cybersecurity expert’s work before notifying the company. Typically, a hacker receives a monetary reward for a successful submission; for less critical vulnerabilities they may get branded company merchandise instead. The prize offered should match the severity of the vulnerability discovered and the effort the ethical hacker has made. If the compensation offered is unfair, the company can expect backlash. In 2013 Yahoo had to change its bug bounty policies, and its program’s reputation suffered, after it offered t-shirts to bug hunters for finding critical vulnerabilities. Compensation is still often criticized by the community as unfair, since the wages paid for standard penetration testing are much higher and do not depend on the number of reported findings.
Some bug bounty ecosystems introduce reputation points and associated leaderboards to reward successful submissions. These reputation points are often the criteria for admission to private programs. While direct programs are often public, allowing submissions from anyone, in private programs only selected security researchers can see the program details and participate. Private programs allow organizations to test their procedures before going public; some remain private for a significant amount of time or even permanently. Consequently, these programs avoid some issues prevalent in public operations.
A bug bounty is a side activity for many security researchers, but some have made bug bounty hunting a way of life. A 30-year-old hacker from Romania earned his first million in such programs within two years. That result is certainly impressive, but it is worth remembering that bug bounty programs do not mean high revenues for everyone. Companies differ in when the prize is paid out: some pay when the reported bug is accepted, others only when it is fixed, and that can take many months.
Very often there is also a dispute about how to classify the severity of a vulnerability. Most companies are friendly toward the bug hunters cooperating with them, but unfortunately this is not a universal standard. The rules of the game are determined by the company, and in the event of a disagreement some researchers break those rules and, giving up the prize, publicly disclose the details of the vulnerability. This, in turn, can lead to legal issues and costs on both sides of the dispute.
At first glance, a bug bounty program looks like an ideal solution: it enables constant testing of system security and does not ruin the company’s budget. The reality is not so rosy. A significant issue in bug bounty programs is the high volume of low-quality submissions. Poor-quality reports are the result of racing to submit a vulnerability first: many ethical hackers aim to maximize the number of submissions rather than focusing on specific vulnerabilities, simply because it is the more profitable tactic.
One of the key factors influencing the effectiveness of bug hunters is an “arms race” in asset discovery. Companies do not always disclose every subdomain or subpage within a program’s scope, so it is common to run tools that search for additional targets. The methodologies vary: spidering, brute-forcing, and dictionary attacks are used simultaneously with the fastest available tools and cloud systems. For example, the Axiom toolkit can distribute the work across hundreds of cloud machines, which are deleted seconds after the work is finished.
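The dictionary-attack side of that arms race can be sketched in a few lines of Python. Everything below is illustrative: the wordlist and the injectable `resolve` hook are assumptions for the sketch, not any particular tool’s API. Real scanners differ mainly in scale, distributing the same probing across many machines.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def enumerate_subdomains(domain, wordlist, resolve=socket.gethostbyname):
    """Dictionary-based subdomain discovery: try each candidate
    label and keep the hostnames that resolve to an IP address."""
    found = {}

    def probe(label):
        host = f"{label}.{domain}"
        try:
            found[host] = resolve(host)
        except OSError:
            # NXDOMAIN or a network error: candidate does not exist
            pass

    # Probe candidates concurrently, as real tools do at much larger scale
    with ThreadPoolExecutor(max_workers=20) as pool:
        list(pool.map(probe, wordlist))
    return found
```

The `resolve` parameter exists only so the lookup can be swapped out (for testing, or for a faster asynchronous resolver); by default it performs an ordinary DNS query per candidate.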
There is also a problem with duplicate submissions. The race to submit first often leads to reports lacking essential details, and the company or platform then asks the ethical hacker for further information. In the meantime, another hacker may submit a significantly more detailed report for the same vulnerability. The second report, although possibly more beneficial to the organization, counts as a duplicate under the rules. The treatment of duplicates varies. Synack addresses the issue with a 48-hour submission window in which all reports are accepted; after two days, duplicates are grouped together and the most detailed report gets the bounty. Some platforms do not monetarily reward duplicates at all, a mechanism that discourages detailed submissions.
Another disturbing trend within bug bounty programs stems from the probability of finding a given number of bugs. The average bounty per program scales super-linearly, while the probability of discovering additional bugs in an already well-examined program decays rapidly. After some time, switching to a fresh program becomes more profitable than an in-depth analysis of the old one. This can leave coverage incomplete, which in turn can create a false perception of security.
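The incentive to hop between programs can be illustrated with a toy expected-value model. All numbers below (base bounty, initial discovery probability, decay rate) are invented for illustration; the only claim is structural: when discovery probability decays quickly, several shallow passes over fresh programs outvalue one deep pass over a single program.

```python
def expected_reward(bounty, p0, decay, hours):
    """Toy model: in hour h of examining one program, a researcher
    finds a new bug with probability p0 * decay**h, so returns
    diminish the longer a single program has been studied."""
    return sum(bounty * p0 * decay**h for h in range(hours))

# 40 hours spent deeply on one program...
stay = expected_reward(bounty=500, p0=0.3, decay=0.9, hours=40)
# ...versus 10 hours each on four fresh programs.
switch = 4 * expected_reward(bounty=500, p0=0.3, decay=0.9, hours=10)
```

Under these assumed parameters, `switch` comes out well above `stay`, which is exactly the shallow-but-wide behavior the text describes.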
There is also a lot of controversy in cases where a security researcher has found and reported a bug to a company that does not run an official program. This creates potential legal issues: the bug hunter could be seen as extorting the target rather than acting in good faith. Above all, companies and ethical hackers have no binding contractual relationship. There is always the risk that a bug hunter could sell the vulnerabilities they discover on the black market, or even double-cross the company by asking for payment while also selling the information on the dark web.
Cybersecurity expert Troy Hunt describes the phenomenon of the so-called Beg Bounty. In this scenario, a company receives from a researcher unexpected information about a supposedly very serious vulnerability. The details will be disclosed in a moment, but first the amount of the payment needs to be settled. Often this “particularly important” vulnerability is something completely irrelevant from a security point of view: an unrealistic clickjacking scenario, a missing HTTP header, or a loose SPF record configuration.
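Findings of that caliber are trivial to automate, which is exactly why they flood inboxes. A minimal sketch of such a check, assuming the response headers have already been fetched into a dict (the header list here is a small illustrative sample, not a complete audit):

```python
# Well-known response headers whose absence beg-bounty reports like to flag
SECURITY_HEADERS = (
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
)

def missing_security_headers(headers):
    """Given a response's headers as a dict, list which well-known
    security headers are absent (header names are case-insensitive)."""
    present = {name.lower() for name in headers}
    return [h for h in SECURITY_HEADERS if h.lower() not in present]
```

Whether any such finding matters depends entirely on context: a missing `X-Frame-Options` on a static marketing page is not a critical vulnerability, regardless of what the report claims.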
Companies don’t have to choose between a bug bounty program and a team of experts to test their security in depth. The best model combines the two: third-party penetration testing performed annually or after a major system update, plus a well-organized bug bounty program that complements the existing vulnerability management process. In-depth tests are an excellent tool for finding and fixing security weaknesses; bug bounty programs help secure companies in the gaps between penetration tests.