Knowing that something has gone wrong is valuable. Companies and software teams spend enormous amounts of time, energy, and other resources trying to pin down what went wrong and where.
But as developers and QA engineers know, sometimes all you get is “the app crashed when I tried to click on a button.”
It’s basically like getting this photo, with a complaint that the boat isn’t working. From this limited context, we can see the ship is old and rusty, with plenty of problem areas that could be fixed up. But which one is the real problem?
Getting the report of an app crash caused by a button might be enough information to start your own investigation, but it’d be far more helpful to have some more context.
- Which button?
- What was the user doing before clicking on the button?
- What other applications were running?
- Cellular, wifi, or no data connection?
- What other actions cause similar crashes or exceptions?
There are many other pieces of information that help diagnose the root of the problem; each one adds a little more context and gets you closer to figuring out the real cause.
There are two important types of context for software errors: information about the application environment, and information about the human being in the real world who was using the application.
Information about the application environment is the kind of logging and filtering of data that a bug capturing tool like Airbrake provides.
Back to our broken ship; let’s get a little more context:
We now know that the ship isn’t working because it’s stuck on dry ground on a beach. It might even be possible to trace what course it took to get there, which navigation systems were running, and what impact the rusty areas may have had.
Once Airbrake is part of your app, it captures errors and exceptions, so you know what was going on in the background when a problem cropped up. With aggregation, you can also see whether errors are clustered in certain browsers, for certain users, or on certain servers.
You can see which parts of your app were running, follow the backtrace to see everything that happened up to the point of the error, and in general get the nuts-and-bolts context of what’s going on.
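The core idea behind this kind of environment capture can be sketched in a few lines of Python. This is a simplified, hypothetical stand-in, not Airbrake’s actual API: it just gathers the error type, message, backtrace, and runtime environment into a report, the way a capture tool would before sending it to a collection service.

```python
import sys
import traceback
import platform
from datetime import datetime, timezone

def capture_context(exc_type, exc_value, exc_tb):
    """Collect environment context for an exception (hypothetical sketch)."""
    return {
        "error": exc_type.__name__,
        "message": str(exc_value),
        "backtrace": traceback.format_tb(exc_tb),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "python": platform.python_version(),
        "os": platform.platform(),
        # A real tool would also record the user, browser, server, etc.,
        # and ship this report to an aggregation service.
    }

try:
    {}["missing_key"]  # simulate the bug a user triggered
except KeyError:
    report = capture_context(*sys.exc_info())
    print(report["error"], "|", report["message"])
```

Aggregating thousands of these reports is what lets you spot the patterns (a particular browser, a particular server) that a single complaint never reveals.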
Using a crowd of human testers provides the other kind of context for really effective bug capture, tracking, and fixing. test IO’s platform makes it possible to get real people checking over your app or website under real-world conditions.
To compare it with our shipwreck, it means you zoom out again and see how people similar to your users interact with it.
Did they expect the ship to be in the water? Do the rust spots make a difference at all?
When you use the power of the crowd for exploratory testing, human testers can provide much-needed context about the types of errors that surface in normal usage. They can even point out issues that never show up in capture tools like Airbrake, because some issues are tied to user expectations, specific kinds of usage, and other assumptions people bring when interacting with software.
By combining these two types of bug capturing, you get very different but equally helpful kinds of context for software development and fixing bugs. With the granularity provided by Airbrake, you can even pinpoint the exceptions discovered by testers and check the backtrace when you get their detailed bug reports. That means developers get both a human-written report with screenshots or screencasts about the bug and full visibility in Airbrake into the application environment and backtrace for that particular bug report.
In the end, it’s all about getting enough information to your development team to empower them to make the right decisions and work effectively. Both test IO and Airbrake are striving to make software development better, and both do it by showing developers as much of the big picture as they can.