Bing Is the Buggiest (But Second-Best) Search Engine, Say Software Testers in uTest Report

9/14/09


The testers were also asked to rate each engine on the “overall accuracy of search results.” In this category, Google again came out on top, with 90 percent of testers rating it excellent or good. Google Caffeine came in second, with 83 percent rating it this highly, followed by Yahoo with 53 percent and Bing with 42 percent.

But when asked to rank the three major search engines in terms of their overall accuracy, the testers put Bing in second place, ahead of Yahoo. And when it came to “best real-time relevance” and “fastest page load speed,” the testers ranked the three engines in the same order (Google, Bing, Yahoo).

I asked Johnston exactly what a uTest community member might consider a “bug” when it comes to a public-facing Web application like a search engine. He explained that uTest testers classify bugs as technical, functional, or user-interface-related. A user-interface bug might simply be a link that was supposed to be blue but came out purple. A functional bug might be an unexpected behavior—like being sent to the wrong page. A technical bug would be a more serious problem, like a browser crash.

Interestingly, Johnston said that while the search engine testers found more bugs than in past battles, the bugs “skewed toward the lower end of the severity scale,” meaning that for the most part, all of the search engines work well. Johnston says the testers did not find a single bug relating to security or privacy—for example, mixed-up accounts or preferences settings.

And it’s important to remember, Johnston said, that uTest testers aren’t your average Web surfers. “When the average person is using a search engine, you’re trying to get it to work,” he points out. “When you’re testing it, you’re trying to get it to break.”

The bug battles are “an excellent community engagement tool” for uTest, says Johnston. “Certainly, the prize money matters, but I think the bragging rights are almost as important as the money. It’s an extension of the uTest reputation system. And it’s a great chance for new testers to break in and show us what they can do.”

Not incidentally, Johnston says the bug battles demonstrate the power of uTest’s crowdsourcing model. “It’s a real-world demonstration of what the community can do. We think it’s fascinating to see how quickly you can mobilize a community of professional testers and point them at an application of any type and bring that much coverage to bear.”

After past contests, Johnston says, the vendors whose applications were tested—Microsoft, Mozilla, and Google in the case of the browser battle, and LinkedIn, MySpace, and Facebook in the case of the social-networking battle—have often reached out to uTest to ask for the detailed results, so that they could address the bugs directly.

“They took it in the spirit in which it was intended,” Johnston says. “We’re not picking on these companies or their products. We are vendor-neutral, and we think this is as honest a real-world comparison as we can set up.” He says uTest would be happy to share the results of the search engine battle with Microsoft, Yahoo, and Google.

Wade Roush is a contributing editor at Xconomy.

