By Laurie Mercer, Security Engineer at HackerOne
My first modem was a 14.4k baud box that blinked orange and green by the phone, emitting strange noises that disturbed our cat Conte and stopped my parents from being able to make telephone calls. I spent most of my teenage years customising Winamp and Gentoo Linux, installing mad screensavers, and helping search for extraterrestrial life, before entering university in 2001: the year of both A Space Odyssey and the Agile Manifesto.
Graduating from the newly named School of Computer Science (renamed in 2002), I joined a graduate scheme as a software engineer, where I coded in Java, C++ with the STL, and .NET 3.5 (Python was seen as exotic). Our source code management system was VSS, we ran 3-to-6-month release cycles with onerous weekend releases, and we deployed manually onto physical servers hosted in the next room. It wasn’t quite the mainframe age, but it felt like it.
The technological transformations of the following 15 years have been phenomenal. Today internet connections at home run at high and ultra-high speeds, and 5G promises the same on the road. Shenzhen, a tech city of 20 million citizens, grew out of a fishing village. Moore’s Law has held, and the dawn of quantum computing appears on the horizon. At the same time, from the petty theft of wallets to the stealing of national secrets and the destruction of critical national infrastructure, crime migrated from the physical world into this new cyberspace.
I was sucked into a boutique security consultancy, passed CREST examinations, and tested web, mobile and infrastructure estates in the finance, telco and government sectors. A hacker for hire, charged at a day rate, working with legends such as Albinowax, Alex Chapman and James Forshaw. The result was an endless roll of reports filled with terms like “SQL Injection”, “Cross-Site Scripting”, “Information Disclosure” and the dreaded “Remote Command Injection.” As well as regularly and legally breaking into banks, the best thing about working for a boutique consultancy is the diversity of work. We were once asked if we could penetration test an ambulance (the answer was yes; the prerequisites were the vehicle and its keys), and I am glad to see they continue their good work on fun projects like Cost-Effective Drone Detection and CBEST engagements.
But one thing bothered me. Agility in software engineering was no longer cutting edge but mainstream, and as ideas like DevOps took hold, as infrastructure became code, and as cloud computing became ubiquitous, the speed of development grew well beyond the capability of one or two testers, given five days once or twice a year, to assure a system was free from vulnerabilities.
That is the reason I decided to join the growing Shift Left movement and try to embed security testing into the Software Development Lifecycle (SDLC). Technologies like Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) were exciting ways to bring security knowledge and techniques into the hands of the people who actually wrote the code: the developers. Shifting the discovery of vulnerabilities earlier in the process of making software saves time and money, in the same way that a defect in a clay pot is easier to fix before you put it in the kiln. More importantly, using automated techniques, a developer can scan code every hour of every day, matching the speed of security testing to the speed of development.
I was recently reading Language Unlimited by David Adger, which describes an interesting experiment: write a long sentence that fills a line on the page, then google it in quotes. Chances are it will not be found. You are the first person to write this string of words: sentences almost never seem to recur. Computer languages are more structured, but only marginally so. As long as people are writing code, that code will continue to be as unpredictable as we are. Automation can codify rules to detect new classes of vulnerability, but it will always be waiting for that first human reporter. Sometimes it simply takes a human to find a bug introduced by a human.
How, then, to bridge the gap and enable human testing at the speed of the SDLC? For me, the answer is clear: Bug Bounty and Vulnerability Disclosure programs, or what we at HackerOne call Hacker-Powered Security.
The Open Source movement was built on the idea that “given enough eyeballs, all bugs are shallow”. Bug Bounty programs make all security vulnerabilities shallow. Managed correctly, we can achieve continuous security. Hacker-Powered Security has been around since the birth of the internet, and, once the preserve of technology companies like Google and Yahoo, is now being used by industries as diverse as finance, retail, gaming and national governments. Hacker-Powered Security finds bugs fast and with less effort, so that security and engineering teams can focus on what is important: reducing risk and launching great products!
Organisations use Hacker-Powered Security to empower citizens and users to make products and services safer, and their reports are published in real time in public and private Hacktivity feeds. They also demonstrate that the world is flat: talent is not exclusive to cities like Boston and London, nor to professionals with long acronyms following their names. Who would have known that the first hacker to reach $1,000,000 in bounties would be a young man, not yet 20 years old, in Buenos Aires? Who could have predicted that the number one hacker for the Singaporean Government would be a 24-year-old military reservist? Hacker-Powered Security acts like the scouts scouring the football pitches of the world, looking for the next Ryan Giggs of cyber security.
Any company that delivers services over the internet should have some form of Hacker-Powered Security. The future is open, collaborative and secure. And that is why, two years ago today, I joined HackerOne!