I’m uTest’s 2012 Mentor of the Year!

For the past three years, uTest has recognized uTesters who have consistently gone above and beyond the call of duty. uTest recently announced their selections for the 2012 Testers of the Year, and I was selected as the 2012 Mentor of the Year!

Wow! What a thrill!

As I’ve mentioned many times, uTest provides us testers with many opportunities to grow and develop our testing skills. We are constantly exposed to new products, devices, and customers. The uTest forum always keeps us up to date on the latest testing trends and hot debate topics. But uTest offers us more than opportunities to learn; uTest also provides a platform for us to teach and mentor.

My greatest thrill comes when uTesters comment on how one of my posts helped or inspired them. It’s the motivation behind everything I write. It’s a privilege to be able to influence new uTesters as they evolve into highly skilled and respected testers.

uTest has assembled a community of testers ready to learn, but that need must be met by those willing to teach. Every tester has knowledge they’ve gained through study and experience. No matter how simple it may seem, that information is valuable. If you’re brave enough to share what you have learned, you’ll experience the amazing feeling of knowing you are positively impacting your community and industry.

I am truly honored to receive this award and I want to extend my sincere thanks to the uTest team and the uTester community.

If you care to read any of my “uMentor” posts, they are all located here.

5 Ways to Improve Your Bug Titles

I originally posted on the uTest forum here.

Bug titles are one of the most important pieces of your bug report. They are the face of your bug; they convey its value and can help or hurt the overall efficiency of the test cycle. Far too often, testers don’t give their bug titles the attention they deserve. This post will try to change that. Here are 5 tips to help you improve the titles of your bug reports.

Consider Your Audience

Like the bug report itself, the title is intended to convey information. The main difference is that the title is more concise. A well-written title will quickly and clearly summarize the bug and its value.

To communicate this information effectively, you need to consider your audience. Bug titles are read by different audiences who may use the title for different reasons. Testers have the difficult job of writing a title that satisfies the needs of two different audiences at the same time: the customer and your fellow testers.

Customer
When the customer or Test Team Lead (TTL) reviews the bug list, one of the first things they do is look at the title. As we talked about in Reporting High-Value Bugs – Part 2, part of reporting a high-value bug is “selling” it to the customer. The title of your bug is part of your sales pitch. Always keep the title short and to the point. You want to focus on the end result, not the actions. For example:

Use “User profile – Unable to link to Facebook” instead of “Clicking the ‘Link to Facebook’ button doesn’t do anything”

Also use action words that convey importance, such as ‘prevented’, ‘does not’, ‘inconsistent’, and ‘unexpected’.

Fellow Testers
Your fellow testers use the title of your bug in a very different way. They use it to determine if the bug they found has already been reported. To help them, you need to include the key words they will be searching for.

Hopefully, before you report your bug, you search the bug list to see if it has already been reported. Make a note of what you searched for because those are the words you should consider including in your title.

In Reporting High-Value Bugs – Part 2 we also talked about reporting the root cause of the bug. The same is true for the title. Your title should describe the underlying problem, not one of its many possible symptoms.

Follow the uTest Standard

uTest has a crash course dedicated to Bug Title standardization so I’m going to point you there first: http://help.utest.com/testers/crash-courses/general/bug-title-standardization

To summarize that post, every bug title should be broken into two distinct parts: the “Area” and the “Description”. The area is the place in the application where the bug occurs. The description is a brief summary of the bug. These two parts should be separated by a hyphen.

For example, in this bug title:
Homepage – The ‘Contact Us’ button is linking to the incorrect page
“Homepage” is the area and “The ‘Contact Us’ button is linking to the incorrect page” is the description.

This can get a little tricky when the area is deep in the application. If there was a bug in the uTest platform on the payments screen in the Account & Settings section, how should we identify that area?

In the link above, one of the authors suggests you write it like this:
Account & Settings – Payments – Total payout amount is incorrect

Personally I don’t like this suggestion. Testers who do this tend to put the navigation steps in the bug titles. That is not the place for that information. Plus having more than two sections makes the title difficult to read.

I prefer to list only the broad area of the application and include the more specific area in the description. Here is how I would write this title:
Account & Settings – The total payout amount on the Payments page is incorrect
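
To make the convention concrete, here is a rough Python sketch (my own illustration, not anything uTest provides) that checks whether a title splits into exactly the two parts described above:

import re

# The convention: "Area – Description", separated by a hyphen (or en dash).
SEPARATOR = re.compile(r"\s+[-–]\s+")

def check_title(title):
    parts = SEPARATOR.split(title.strip())
    if len(parts) < 2:
        return "Missing the 'Area – Description' separator"
    if len(parts) > 2:
        return "More than two sections; fold the extra detail into the description"
    area, description = parts
    return f"OK (area: {area}, description: {description})"

print(check_title("Homepage – The 'Contact Us' button is linking to the incorrect page"))
print(check_title("Account & Settings – Payments – Total payout amount is incorrect"))

Feel free to adapt the rules; the point is simply that a title should parse into one broad area and one clear description.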

Do Not Specify the Test Environment

Many testers include the device or environment they use to test in the title of their bugs:
[iPhone 5] User profile – Unable to link to Facebook

The landscape that we test against these days is so large that it’s no wonder that this has become more common recently. Testers feel that the device they found the bug on is an important piece of information. While that is true, the title of the bug is not the right place for it.

The main reason that this is a bad practice is that it gives a false impression about the scope of the bug. Generally, when testers start their bug title with the environment, they are simply stating the device they found the bug on. But the customer may interpret that to mean the bug is only present on the device listed in the title.

Unless you have tested against every other possible device/environment, don’t include this information in the title. It adds little value and can actually cause problems.

As with most rules, there are exceptions. Here are two:

Explicitly required
If the cycle specifically tells you to include environment information in your bug titles, you should follow the instructions.

For example, this is directly from a test cycle I was recently on:

NOTE – If you find an iPad bug: Please add [iPad – iOS xx] at the beginning of your bug title.

In this situation it is perfectly fine (and even required) that you include the environment in your title.

However, you may see something like this in the instructions:

BUG TEMPLATE: Please include the following info in all your bugs: Mobile device model and OS version; Description of bug; Wi-Fi or 3G/4G?

This does not mean that all this information should be in the title. It simply means that it should be specified in the body of the bug. Generally you should put this information in the ‘Specified Environments’ or ‘Additional Environment info’ fields. It is the “Bug” template, not the “Bug Title” template.

Environment specific bugs are allowed
Occasionally a cycle will allow the same bug to be reported for different environments. In this case, each one of these bugs is considered different by the customer. Since the only difference between the bugs is the environment, it is necessary to include the environment in the title. Otherwise you would have multiple bugs with the exact same title and your fellow testers would have to look at the contents of the bug to see which environments had already been reported.

Keep Consistent with Earlier Bugs

Sometimes a cycle will ask you to include some extra piece of information in the title. One example would be the build of the application that you tested. What I usually see happen in these situations is that every tester comes up with their own way of including this information. The result is a messy bug list that looks something like this:

[b 123] Area – Description
build 123 => Area – Description
Area – Description {build 123 v.2.045.34}
123 Area – Description

This is difficult for the customer and TTL to read and makes it impossible for them to quickly scan through the list.

Assuming that the earlier bugs followed the uTest standard and everything we addressed above, you should follow the pattern established in the first few bugs. Don’t worry about being original or sticking to your own personal preference; the goal is consistency. This will make the customer’s and TTL’s jobs much easier. See how much easier this is to read?

[b 123] Area – Description
[b 123] Area – Description
[b 123] Area – Description
[b 123] Area – Description
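
If the cycle doesn’t dictate an exact format, a tiny helper can keep your own reports uniform. A hedged Python sketch (the “[b 123]” pattern is just the example above; match whatever the first few bugs in the list actually used):

BUILD = "123"  # whatever build tag the cycle asks you to call out

def format_title(area, description, build=BUILD):
    # Mirror the pattern established by the earlier bugs: [b <build>] Area – Description
    return f"[b {build}] {area} – {description}"

print(format_title("Shopping Cart", "Items added to the cart are not saved"))
# [b 123] Shopping Cart – Items added to the cart are not saved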

Learn From Other Testers

You can learn quite a lot from reviewing the bug reports of your fellow testers. You can see different styles of reporting the reproduction steps, come up with new ideas of how to test, and see which types of devices are the most common.

The same can be said for the bug’s title. When you are reviewing bugs, don’t just skip over the title. Instead, take advantage of the opportunity to learn from the mistakes and successes of others.

First, evaluate the title of the bug on its own:
Does the title follow the standard? Does it include appropriate key words?
Then look at it in the context of the entire report:
Does the title accurately and efficiently summarize the bug? Does it “sell” the importance of the bug?

As you pay more attention to your own bug titles as well as the titles of other bugs, you will start to see the types of patterns we have just talked about. It will become apparent that the testers who do these things are the ones who separate themselves from the crowd. Bug titles are extremely important and should be treated that way. Keep these tips in mind and you will be one step closer to writing the perfect bug report.

I’d love to hear your thoughts on this topic. What other tips can you give your fellow testers?

Is uTest a Scam?

There are some reviews out there from testers claiming uTest is a scam and that you can’t make any money. I’ve also seen a few uTest customers complain about the quality of the testing and uTest’s sales/negotiating practices. These concerns are valid and I can understand why some people have the impression that uTest is not what it claims to be. In order to address these perceptions, we need to look at them from two points of view: the view of the customer and the view of the tester.

The uTest customer

Complaint #1: The quality of the testing was lower than expected

I have no problem saying that there are a lot of bad testers at uTest. It’s true, there is no point in denying it. These testers only report low-value bugs, they don’t follow instructions, and generally disrupt the test cycle. Sometimes it is because they’re inexperienced testers, sometimes they’re just plain bad. Unfortunately this is one of the downsides of crowdsourcing. To uTest’s credit, they do realize this and are continuously working to improve the skills and abilities of uTesters. They also identify and remove problem testers.

On the other hand there are some absolutely awesome testers at uTest. These top testers consistently provide the customer with excellent service and high-value bugs. uTest does a pretty good job of identifying the strong testers and ensuring that they are the ones working on your projects. Keep in mind that there are literally hundreds of projects, dozens of Project Managers and thousands of testers from all over the world, so every test cycle is going to be different.

So what’s a customer to do? First you need to manage your expectations. Understand the limitations and benefits of uTest and make sure they align with your testing needs. Second, you need to be involved. Yes, uTest is a service, but the quality and success of your project is a direct result of your participation and influence.

Here is an excellent article from Elena Houser on how customers can get the most from their uTest (or any crowdsourced) testing service. It is a MUST read for any potential uTest customer: http://trancecyberiantester.blogspot.com/2012/10/crowdsourced-testing-lessons-learned.html

Complaint #2: The uTest sales process is shady

In full disclosure, I’m not a uTest customer so I’ve never gone through this process myself. However, I am a uTest TTL (Test Team Lead) and so I have worked with many different customers. I’ve seen customers who are extremely satisfied and those who constantly complain. It doesn’t take long before you start to see a pattern.

I really could just do a copy/paste from above. Again, this comes down to having correct expectations and being involved. Customers who follow Elena’s advice will find that uTest’s services are well worth the money and effort. Those who don’t will be disappointed with their results.

Another point worth mentioning is that uTest is a start-up company. They have only been around a few years yet they are growing and changing incredibly fast. In just the last year I’ve seen some impressive improvements. I have no doubt that as the company matures and customer needs and expectations are better understood, the sales experience will improve and mature as well.

The uTest tester

Complaint #1: uTest is a scam

As I mentioned above, uTest is a start-up. The company grew faster than many people expected and as a result it went through some obvious growing pains. Everything about the company was (and still is) evolving. The payment process was still being worked out, bug reporting and evaluation was confusing, and in general the tester experience gave some the impression that uTest was either a scam or just unprofessional.

Admittedly, the first tester interface was terrible. It was slow, buggy, and difficult to use, which is quite ironic for a software testing company. This problem has now been addressed. uTest recently launched their new tester platform and it is so much better (read more here). There are now several reliable ways for testers to receive payment, and there is an entire team of employees solely dedicated to the welfare of the testers. These are just a few examples of how uTest is working to improve its image and show testers that uTest is a legitimate company and a great place to work.

Complaint #2: You can’t make any money

I recently read a review from a uTester who said he had reported 87 bugs but was only paid for 16 of them. The other 71 bugs were rejected. He felt that bugs were intentionally rejected in order to avoid paying the testers. He’s not the only one to complain that testers are not adequately compensated for their efforts. Fortunately, these complaints come down to a few misconceptions.

Testers need to understand the uTest bug payment model. Customers pay uTest a set price for an agreed upon amount of work. It is up to the customer to accept or reject the bugs reported by the testers. uTest then pays the testers for the bugs (and other work) the customer accepted. Since the customer pays a flat fee no matter how many bugs are reported or accepted, they have no financial incentive to reject individual bugs. Bugs are rejected for valid reasons, not to avoid paying for them.

The other important point is that testers are not paid for their efforts (there are some exceptions); they are paid primarily for the value they provide. Testers who provide the customer with high-value bugs make a lot of money. Testers who report low-value or “junk” bugs make very little money.

The bottom line is good testers can make good money working at uTest. Poor testers will be frustrated.

Conclusion

uTest is not a scam. It is a legitimate company and an amazing one at that. While uTest is not perfect, most criticisms can be answered if you look at the entire situation objectively.

Reporting High-Value Bugs – Part 2

I originally posted on the uTest Forum.

In Part 1 of this series, we talked about the reasons why a uTester should focus on reporting high-value bugs. That led to some fantastic discussion and a spin-off thread about reporting every bug you find. Before you continue, you should go back and review those threads to get caught up on the topic.

In this Part 2, we are going to look at “HOW” you can find and report high-value bugs. This is a popular topic at uTest and there are many threads, webinars, and crash courses available (There are links to some of that material below). This post is intended to complement those resources and help us continue to improve our testing skills.

I’ve teamed up with fellow TTL and uMentor, Allyson Burk for a double dose of testing goodness :) We have some great ideas for you, so let’s get started!

Finding High-Value Bugs

Focus on One Cycle at a Time (Allyson Burk)

I find there are two approaches to the workload at uTest: 1) accept every cycle and file a few bugs on each, or 2) accept fewer cycles and file more bugs per cycle. Personally, I find the latter to be the best way to make more money, have more satisfaction in my work, and increase my tester rating. Why? Because I can increase the quality of my work using this approach.

Giving myself more time on a product allows me to be methodical. I might use a few different approaches depending on the type of product.

Deep, power-user scenarios. I start with a goal in mind. A recent cycle I was on had a great example of this – you are a soccer mom and you need to equip your child for the upcoming season. This is going to yield the issues that will affect the target audience of the client. This approach can definitely yield high-value bugs because you will be able to tell the client what is going to drive those target customers away.

Break down the app into areas and dig deep. This is the approach I use when it is a newer, more unfamiliar application. I might spend a few hours in settings making sure each setting combination is functioning properly; or trying a variety of shopping cart, wishlist, checkout scenarios; or product customization. The key is not just spot checking to see if the area is functioning, but to really stretch the code and make sure all variables have been covered.

Going down the rabbit hole. This is a less precise, more intuitive path where I just start investigating the parts of the application that I find interesting and follow them as far as I can take them. If I really love the app or find it fun to use, this is the approach I will take. You have to be careful with this approach because you can “waste” a lot of time.

The key to all of these approaches is TIME. You cannot test in this deep manner if you do not have time and you cannot have time if you have 5-15 active cycles clamoring for your attention.

(Note from Lucas)
When you accept a new cycle, you are expected to thoroughly read the scope and instructions, read through the known bug list, review any other attached documents, and catch up on any chat posts. Then you have to set up your testing environment. You have to install the app, create an account, configure your proxy, etc. These start-up activities can be quite time consuming. Keeping your active cycles low allows you to spend less time getting ready to test, and more time testing.

Know the Status of a Project (Allyson Burk)

In general, clients are going to value bugs differently depending on where they are in the development cycle. It is important to pay attention to clues about where the client is in development when searching for high-value bugs. This can be a moving target depending on the methodology used, agile vs. waterfall for example, but I think for this conversation we can think in terms of early, middle, and late in the development cycle.

Early in the development cycle, you can imagine that content-related bugs are not going to carry huge value. The look and feel may still be in development, the final copy is likely not completed, and images may not have been delivered. The client is instead going to be more focused on core functionality. They need to make sure the major functionality is there and working properly.

Midway through the development cycle, functionality is still going to be the focus, but content starts to be more important. If ever there was a time to value spelling/grammar bugs, this would be it. Most copy has to get locked down for legal/translation/marketing/etc. so the client may be looking to make sure this is completely clean before shipping it off for various approvals.

Late in the development cycle, stability and polish are key. Everything needs to be functioning at this time and the application needs to have a minimum of crashing/blocking issues. Many times in this last stretch before release of a product, the client might only be interested in High or Critical issues. The code will be fairly locked down at this point. The client will often not want to risk fixes that might break other functionality, so they are really going to be interested only in bugs that are of such severity to make the app unusable.

As uTesters, I think the trickiest aspect of this is knowing what phase of the development cycle the client is in. Logic might dictate that if you are on the first cycle for a new client, they would be early in the development cycle. Given my experience, I’d venture a guess and say that is almost never the case. I’d say we are usually brought in after the code is pretty stable and the content is beginning to be finished… somewhere in the mid stages.

But how can we know with more certainty?

Sometimes, this is as easy as reading the overview and paying attention to context clues. The PM might explicitly state that this is the first testable build of the product (early) or that this is the release candidate (late). There may be things excluded from the scope like images (early to mid). There may be a very long known issues list (mid to late) or no known issues at all (early or late – HA, this is a tricky one! They may clear all known issues for the later builds in order to make sure there has been no code regression before shipping the product out).

In the end, we will have to rely on the information provided and forge ahead. There is also never any harm, if you feel that there is no clear focus provided, in asking the question: Is there anything in particular the client wants us to focus on at this time? You might be surprised at the avenues of testing that will open up for you.

Writing High-Value Bug Reports

Report bugs, not symptoms (Lucas Dargis)

The other day I was the TTL of a cycle and one of the features in scope was an account creation screen. The user was required to enter several pieces of information, including their Address. Two different testers reported these two bugs:

Bug 1 – Address field allows “!!!!!!!!!!!!!!!!!”
Bug 2 – Address field allows “!@#$%^&*()_+”

I see this type of thing all the time, so I know some of you are saying “What’s wrong with that?”. The problem is that both of these testers reported different symptoms of the same bug. If they had taken some time to investigate the Address field further, they would have realized that the issue wasn’t a specific input making it past the validation. They would have learned that the real issue was that the Address field wasn’t being validated at all. The user could have entered anything (or nothing) and the system would have accepted it.

Whenever I encounter a bug, I spend a significant amount of time testing all around it, trying different inputs and different sequences of events until I  understand the root cause and all of its symptoms. This is where testers can show their worth. It’s easy to click on something and then report on the results, but it takes a much stronger skill set to be able to investigate potential bugs and then provide a valuable report of your findings. Customers can see this effort and they usually reward it.
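
As a rough illustration of what that investigation can look like, here is a hedged Python sketch. submit_address() is a made-up stand-in for however you actually exercise the form (manually, through a UI driver, or via an API), and the probe list and printed conclusions are just examples:

def submit_address(value):
    """Stand-in for submitting `value` in the Address field and reporting
    whether the form accepted it; replace with your real steps or harness."""
    return True  # placeholder so the sketch runs; the buggy form accepted everything

# Instead of stopping at the first odd input that gets through, try a spread
# of inputs. If everything is accepted, the real bug is "no validation at all",
# not "the field allows '!!!!!!'".
probes = ["123 Main St", "!!!!!!!!!!!!!!!!!", "!@#$%^&*()_+", "", "   ", "a" * 5000]
results = {value: submit_address(value) for value in probes}

if all(results.values()):
    print("Root cause: the Address field is not validated at all")
else:
    rejected = [value for value, accepted in results.items() if not accepted]
    print("Some validation exists; rejected inputs:", rejected)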

Sell Your Bug’s Prominence (Lucas Dargis)

If a bug is easy to find, it is usually more valuable than an edge-case bug that hardly anyone would ever stumble into. Identifying your bug and its reproduction steps is just the first step. The best testers know that how their bug report is written can affect how the customer views its prominence (how easy it is to find). The best testers keep their bug reports focused and their steps limited to the critical path. That means you should only list the specific actions needed to trigger the bug.

There is a problem with this approach. Often, bugs are hidden deep within the application and you might feel that you need to explain how you arrived at the bug. The way I get around this concern is to list “Prerequisite” steps at the top of the “Actions Performed” section, where I describe the starting state of the application.

Example:

Bug Title: Shopping Cart – Items added to the cart are not saved
Steps:
1. Go to the URL
2. Click on create new account
3. Enter a valid username
4. Enter a password
5. Click “Submit”
6. Log into the system with your account
7. Search for an item
8. Select the item
9. Add the item to my cart
10. View your shopping cart

The above report lists the steps from beginning to end, but it is fairly long and gives the impression that a user would have to do a series of very specific steps in order to find the bug. Instead, you should only list the steps that are directly related to the bug. Let’s see what that would look like.

Bug Title: Shopping Cart – Items added to the cart are not saved
Steps:
Starting state – User is logged into the application and viewing the details page for a product

1. Add the item to my cart
2. View the shopping cart

Explaining the starting state at the top of the report allows us to remove 8 steps. Now, because only the steps that specifically cause the bug are listed, this bug seems much more prominent and the report does a better job of highlighting the value of the bug. This is an oversimplified example but I hope you understand the point.
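
For testers who also write automated checks, the same idea maps cleanly onto test code: the prerequisite state lives in a setup fixture and the test body contains only the critical path. A hedged pytest-style sketch (the page object and item name are invented for illustration):

import pytest

# Minimal stub so the sketch runs on its own; in a real suite this would be
# your page object or API client.
class ProductPage:
    def __init__(self):
        self.cart = []
    def add_to_cart(self):
        self.cart.append("ANY-ITEM")
    def view_cart(self):
        return self.cart

@pytest.fixture
def product_page():
    # Prerequisite / starting state: user is logged in and viewing a product's
    # details page. All the account-creation and navigation steps belong here.
    return ProductPage()

def test_item_added_to_cart_is_saved(product_page):
    # Critical path only: the two steps that actually trigger the bug.
    product_page.add_to_cart()
    assert "ANY-ITEM" in product_page.view_cart()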

This is just one tip on how to sell your bug. This technique is called “Bug Advocacy” and is something every tester should learn. To learn more about Bug Advocacy, here is a fantastic paper written by Cem Kaner: http://www.kaner.com/pdfs/bugadvoc.pdf

I want to thank Allyson for her contributions to this article. Please feel free to post questions, comments or challenges to anything we’ve written. Hopefully these ideas will prove useful to you in your quest for those high-value bugs.

Additional Resources

Be Creative: Bug-Hunting Tips from a Gold uTester (By Amit Kulkarni) – http://help.utest.com/testers/crash-cou … ld-uTester

How To Write the Perfect (uTest) Bug Report (by Rebecca Showerman and Nikki Sedgwick)- http://blog.utest.com/how-to-write-the- … t/2012/06/

How to Write a Good Bug Report (By Sunil Sidhwani) – http://forums.utest.com/viewtopic.php?f=55&t=3095

When a Bug is Not a Bug – Bugs vs Feedback (By Aaron Weintrob) – http://forums.utest.com/viewtopic.php?f=55&t=3179

Bug Reporting 101 (By Joseph Ours) – http://help.utest.com/testers/crash-cou … orting-101

2012 Year In Review

2012 was a career year for me. For the past 8 years I’ve just had a job. I didn’t really enjoy what I was doing and didn’t put much thought into how I could or should develop my career. Things changed quickly early in the year as several opportunities came together. Here are a few of those highlights:

New Jobs
I found the best job I’ve ever had, working as the principal tester at a semiconductor manufacturer. Before I arrived, there was no formal testing in the IT department. I was tasked with introducing testing in one group and then, over time, growing it throughout the organization. I’ve been able to test the new “Flagship” application, which is a few weeks away from our first release. So far we have received rave reviews on all aspects of the application and the development process.

I was able to expand my testing skills by learning the nuances of SPA (single page application) testing. This has been a fun and challenging experience, mostly because it isn’t done much yet, so there are few resources out there geared specifically toward SPA testing.

I was also able to dabble in automation testing for SPAs. Since this type of application is client-side heavy, the true value comes from exercising it through the browser. Many automation solutions and supporters prefer testing the code directly (via APIs or a test harness), bypassing the browser. That has made this learning process more of a struggle for me than I had expected.
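
For context, exercising a SPA “through the browser” usually means driving the rendered UI with something like Selenium WebDriver and waiting for the client-side code to update the DOM, rather than calling the server directly. A minimal, hedged sketch (the URL and element IDs are made up for illustration):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/app")  # made-up URL

    # SPA views render asynchronously, so wait for the client-side code to put
    # the element in the DOM instead of assuming a full page load.
    search_box = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "search"))  # hypothetical element id
    )
    search_box.send_keys("widgets")
    search_box.submit()

    results = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "results"))  # hypothetical element id
    )
    print(results.text)
finally:
    driver.quit()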

If you follow my blog at all, you’ll know I’m also a uTest fanboy and freelance tester. I’ve already written a blog post about my uTest experiences this year, so I’ll just give an updated summary:

  • Test Team Lead
  • Became a solid iOS tester
  • Worked with and learned from testers all over the world
  • 200 cycles/425 bugs
  • 94% bug approval rate/44% high-value rate
  • Gold Rated (99.75%)

Improved Online Presence
One of the most valuable and educational aspects of this year was my decision to join and contribute to the testing community. I started this blog to chronicle my career development. I only found the time for 11 posts but I was able to post regularly… well kind of.

I spent most of my time focused on the uTest community. I became a uMentor, a forum moderator and one of the most active forum members. My topics have generated hundreds of responses and over 20,000 views. I’m now seeing more and more new uTesters step up and contribute to the growth of the forum and the uTest community which is fantastic!

I was also featured on the uTest blog a few times.

I have learned so much from writing about testing, teaching new testers, and learning from others. I’d say that focusing on developing my online presence has had the largest impact in my growth as a professional tester.

Scrum Mastery
In addition to testing, I’m also extremely interested in software development processes and improving efficiency, specifically the Scrum development framework. Since I had a few years of Scrum experience, I volunteered to be a Scrum Master for the “flagship” product I mentioned above. As word of that project’s success got around, I became a champion for Scrum in our organization. I was able to coach POs, Developers, Customers, and Managers and am currently Scrum Mastering 2 projects. I was asked to give an “Introduction to Scrum” presentation to our department during one of our Lunch & Learn sessions and since then one group has started their own project using Scrum.

To complement and improve my real-world experiences, I attended a Scrum Master course and then passed both the Scrum Alliance and Scrum.org Scrum Master certifications.

So cheers 2012; you’ve been swell. I look forward to meeting you 2013. I know you have many fun and challenging experiences in store.

Testing buzz words that annoy me

Maybe I’m just too sensitive (I have been watching a lot of romantic comedies lately), but there are certain “testing” words that really bother me. Either they are way overused or they are used incorrectly or whatever. I just felt I needed to vent :)

So, here are my top 5 testing words (phrases) that annoy the heck out of me:

It depends – When does it ever NOT depend? How is this helpful? Would you just take a stance, for crying out loud?

QA – This is used way too often and it is usually used incorrectly. QA stands for Quality Assurance. Quality Assurance is process oriented and focuses on defect prevention. It is a term typically used in manufacturing. We are in the defect identification and information providing business. We are not QA people, we don’t do QA. We are testers, we test!

Sapient – Yeah, I get it. You used a smart sounding word to describe testing activities so that testers sound smart. Good one. Go away….

Heuristics – I’m still recovering from how impressed I was by sapient.

Craft – I really don’t know why this one bothers me so much but I absolutely CRINGE anytime I read or hear it. Nails on a chalkboard for me.

So, what testing buzz words annoy you?

Disclaimer:
This is intended to be a fun rant. If you use these words and I’ve offended you, my reaction would depend on several factors. Maybe you should QA this post relying on your sapient abilities. Following a heuristic, take the opportunity to hone your craft. AHHHHGGGG!!!

Improve Your External Bug Videos

If you have ever tried to take a video of a bug on your phone or tablet (or if you are a test lead or developer trying to view one), you know it can be a challenge. If you don’t know what you’re doing, your video can be difficult to view and understand.

This is my first instructional video and it’s simply awful 🙂 Hopefully these tips will help to ensure your audience can get the full value from your videos.

One thing I want to point out: when I was using my iPhone to take the video of the Kindle, you’ll notice that my phone is in the portrait position. Videos taken in this orientation are saved sideways when you try to view them on a computer. To overcome that problem, simply make sure your phone is in the landscape position when you are filming.

You can buy a Clingo stand for yourself here:
http://www.amazon.com/gp/product/B003JTHN4K/ref=oh_details_o00_s00_i00

Reporting High-Value Bugs – Part 1

I originally posted this on the uTest Forum.

Why Even Bother?

It’s no secret that a strategy to make a lot of money at uTest is to start testing the second a cycle opens, log as many bugs as fast as you can, then hope that some of them will be approved. There is little or no regard for quality or value for the customer. The cycle is viewed as a competition and it’s every tester for him or herself. You can make some good money with that strategy.

If you are a tester who uses the strategy above, and sadly I’ve seen many testers who are, this series probably won’t appeal to you. But if you are a tester who takes pride in your work and wants what is best for the customer, uTest and yourself, then you should stick around because we are going to explore some fun ideas.

Disclaimer:
Please understand that I’m not attacking new testers or testers who have a limited skillset. I’m trying to show that there is pride and honor in what we do and that there can be substantial benefits to those who understand that.

In Part 1 of this series, we are going to discuss WHY testers should strive to report high-value bugs. In Part 2 we’ll talk about HOW. When I say “high-value” I’m talking about bugs that the customer approves as Very or Exceptionally valuable. I have three points to make so let’s get to it!

1. Align with uTest’s values

uTest consistently stresses the importance of quality over quantity and how we need to always consider our customers. Recently, uTest has done their part to promote this in two ways.

Rating Impact
uTest restructured the rating algorithm to make the quality of bugs have a much higher impact than the quantity of bugs. If you are interested in your rating at all, this is now the single most important factor. One high-value bug will easily offset a few rejections.

That means stop disputing low-value rejected GUI bugs. You are better off accepting a valid rejection and learning from it than you are disputing, “winning”, and not learning. It’s a hard thing to do, but I force myself to accept valid rejections even if I think I could get them approved. That sting helps me remember my mistake so I won’t make it again.

Increased Payouts
Perhaps an even more impactful change was the drastic payout increases for Very and Exceptionally valuable bugs. One high-value bug can sometimes pay five times more than a low-value bug.

uTest is constantly adjusting the system to ensure customers are getting the value they need, testers are fairly compensated, and primitive “testing” is discouraged. As uTest evolves, you will see more recognition, influence and money shifted towards the strong testers. It’s in your best interest to learn how to test professionally now. Learn to test for the customer, not yourself, and you will find that you are amply rewarded.

2. Receive more cycle invites

Occasionally, new testers message me asking how to get more projects or how to succeed at uTest. I give them some unusual advice. I tell them to ignore low-value bugs; don’t even bother reporting them. This is a very strange idea that receives some interesting responses. “But then I won’t get my five dollars!” Yes, that’s true, but that’s only half the story.

The best way to get invited to test cycles is for the PMs to know that you are a strong tester.
Think of the customer. They are most happy when they receive high-value bugs. PMs remember (and invite) testers who make their customers happy. There are over 70,000 uTesters. Why would they remember you if you report the same low-value bugs that everyone else reports?

A while back I worked the first cycle for a new customer. There were 155 bugs reported. I only reported 8, but 7 of them were approved as high-value. Six months later I received a note from the PM. The customer had another cycle starting and they specifically asked for me to be added to it. WOW! After 6 months they remembered me!

It might sound like I’m bragging here, but that’s not the point. The point is, if you only report high-value bugs, you stand out from the crowd in a big way. As a TTL I often point out these testers to my PMs so we can be sure to invite them to future cycles.

3. Become a better tester

This should be pretty obvious but maybe it’s not. If you report spelling errors all day you will become great at finding spelling errors, but you probably won’t become a good tester. If you spend your time practicing being a good tester, you might wake up one day and find out you actually ARE a good tester. I realize this is a simple concept, but so many people just completely miss it.

So that wraps up Part 1. In Part 2 we’ll look at HOW we can find and report these high-value bugs. I encourage you to take these ideas to heart and think about your motivation here at uTest. Please feel free to ask questions or challenge these points. Like any other advice you receive, always question it and make a decision for yourself.

See you in Part 2!

STPCon Fall 2012 – My Review

I recently was able to attend STPCon in Miami. It was my first conference and I really enjoyed it. Here are some of my highlights.

People I met

I finally got to meet a few members of the uTest staff. Jessica, Matt, Mike, and Chris were all there slinging swag and preaching the uTest gospel. It was nice to get some face time with those folks. Unfortunately we didn’t have a uTest meet up. Maybe next time.

I did get to meet a former uTester, Joseph Ours. I talked with him for a while about his experiences with uTest and how he has made the transition into management and consulting. Joe is a knowledgeable and well-spoken guy. I really enjoyed getting to know him. Too bad he’s not an active uTester anymore; I would love the chance to work with him.

Hands-on vs. Lecture format

There was a track of 7 classes that focused on hands-on practicals. It seemed like a cool idea to me, so I spent the first 3 sessions in hands-on classes. While they were interesting and it was fun to test with other testers, I didn’t really learn anything. It was more like “Here’s a program, let’s think of ways to test it”. I do that every day when I test with uTest. I wasn’t there to test, I was there to learn how to become a better tester.

While I can see the value of the hands-on classes for people who don’t have the luxury of constantly testing new things with new people, for me it wasn’t the best use of my limited and expensive time. (I was there on my own dime.)

After I realized that, I switched over to the lecture/discussion sessions and really found a lot of value there. Most of those sessions weren’t lectures but more of someone sharing their experience or suggesting ideas on how to do things better. The dialog between the presenters and the audience was also great. There was lots of idea sharing and discussions.

Favorite Sessions

Mike Lyles of Lowes gave a two-part talk about how Lowes created a Test Center of Excellence (TCoE). It was extremely useful for me to see what a mature testing organization looks like and how it functions. It helped me start to develop a long-term vision for my testing organization.

I also enjoyed Joseph Ours’s talk about “Redefining the purpose of software testing”. Simply put, he was making the argument that testing teams should be looked at as information providers, not gatekeepers or decision makers. I have been making this argument for a while now but had been looking at it the wrong way. I thought it was just the definition of what testers should do. His explanation showed me that it can actually be an effective way to explain the value that testing provides. Testing is a service. We provide information to make informed decisions.

One lady in the audience challenged Joe, saying that she is the test lead as well as the gatekeeper and that in her organization it works fine. Joe and she debated a bit, but she wasn’t convinced. After the presentation, Joe and I chatted about that point more and realized that we need to think of roles and people separately. A tester’s role is to discover and provide information. A decision maker’s role is to make decisions based, in part, on that information. Usually those roles are divided between two different people (or groups of people), but in some cases it may be the same person.

In her case, she led the testing team, but she also had additional business knowledge and the authority necessary to be able to decide when the product was fit for delivery.

Not so great

The Vendor showcase was disappointing. There were only 10 or so booths. I talked to most of them in under an hour and spent the rest of the time trolling around the uTest booth sharing my uTest testimony with anyone who would listen 🙂

Overall

I’m glad I was able to make it down to STPCon. I got to meet some great people, learn a few things, and get a better understanding of our industry in general. It was a valuable experience and I am looking forward to attending next year. Maybe I’ll see you there.