
Local search and automated rating site Grayboxx announced a series of upgrades today, meant to enhance and expand the site as well as infuse some personal flavor (read: trust) into its core rating features.

As we’ve written, the company’s “implicit reviews” pull together disparate information sources to rate local businesses. The upside is that its algorithms base quality scores on factors across many different categories and locales, alleviating a perennial challenge in local search: generating content outside the most popular categories and markets (think restaurants in New York).

The downside, however, is that the automated nature of the ratings misses out on the personal flavor and context that has been behind a great deal of the growing popularity of user reviews in local search. Indeed, this “social local search” is an offshoot of the larger phenomenon of social networking. Its appeal is correspondingly grounded in a certain degree of social interaction, which automated scoring of businesses largely lacks. Add the fact that Grayboxx can’t reveal its secret sauce (how exactly it comes up with these ratings), and there can be misgivings about the veracity of the scores.

That’s where today’s announcement comes in. In addition to expanding the number of data sources aggregated into its preference scoring, the company has added user-generated review functionality. This will come in the form of aggregated reviews from Yelp and Citysearch, as well as an upcoming feature that lets users write reviews directly on Grayboxx. This should further personalize the experience so the site doesn’t have to stand on its automated ratings alone.
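To make the aggregation idea concrete, here’s a minimal, purely hypothetical sketch of how an automated preference score might be blended with user reviews pulled from third-party sources. Grayboxx hasn’t disclosed its actual method, so the function, the 0–5 scale, the weights, and the source names below are assumptions for illustration, not a description of the company’s algorithm.

```python
# Hypothetical sketch only: Grayboxx's scoring method is secret, so this just
# illustrates one generic way a site could blend an automated "preference
# score" with aggregated third-party and native user reviews.

from dataclasses import dataclass
from typing import List


@dataclass
class Review:
    source: str      # e.g. "yelp", "citysearch", "native" (assumed labels)
    rating: float    # normalized to a 0-5 scale (assumed)


def blended_score(automated_score: float,
                  reviews: List[Review],
                  review_weight_cap: float = 0.5) -> float:
    """Blend an automated 0-5 score with aggregated user reviews.

    The more reviews a business has, the more weight they carry,
    up to `review_weight_cap` of the final score.
    """
    if not reviews:
        return automated_score

    avg_review = sum(r.rating for r in reviews) / len(reviews)
    # Confidence in the review average grows with volume (illustrative curve).
    review_weight = review_weight_cap * min(1.0, len(reviews) / 20.0)
    return (1.0 - review_weight) * automated_score + review_weight * avg_review


# Example: an automated score of 4.2 tempered by a handful of mixed reviews.
reviews = [Review("yelp", 3.0), Review("citysearch", 4.0), Review("native", 3.5)]
print(round(blended_score(4.2, reviews), 2))
```

The only design point this is meant to convey is that, as review volume grows, weight can shift gradually from the opaque automated score toward the more transparent user ratings.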

Today’s announcement also includes an official nationwide launch. Until now, the company had rolled out slowly in a number of mid-sized U.S. cities, attempting small, localized publicity events for each. Operating at a nationwide scale, it should be able to learn more about its own model and find out where it best “fits.” Founder Bob Chandra believes this niche is cities with populations of 100,000 to 1 million — those that the Yelps and Citysearches of the world aren’t serving as well.

Overall, the company’s main goal going forward should be to keep adding sources of flavor and context to offset the lack of personality in its core rating system. Asked whether video could be an opportunity to add this “color,” Chandra replied that it could be in the site’s long-term future, but that nothing is on the books currently.

As we suggested in a past interview with Chandra, he’s also now thinking more seriously about licensing out the preference scoring to IYPs or other local search destinations that are interested in rating local businesses with more breadth. We’ll have to wait and see if prospective partners — and more importantly, users — bite on this promise.

This Post Has 3 Comments

  1. I don’t get this site at all, and it seems strange that nobody is critical of the notion of some sort of secret algorithm for ratings. Algorithms work for Google because the results are self-evident. This company just wants to put a score beside a business and we are supposed to believe it. Are users really that simple and stupid?

  2. Thanks for the comment, Troy. That is exactly the mistrust that many users have with the site. I believe the company when it says it can’t reveal its secret sauce for competitive and other reasons. But it has to realize that this secrecy is going to cost it some users, and that it has to do everything in its power to elicit trust and a warm, fuzzy feeling in other ways (i.e., the user reviews and video mentioned in the post).

    Chandra also supports the automated ratings with the assertion that the company has compared its ratings against the reviews of sites such as Yelp and Citysearch and seen favorable (comparable) results. Those results aren’t public either, so your skepticism is fair. Again, this secrecy is a double-edged sword for the company — the source of its strength and the source of a considerable degree of user mistrust. Your comments represent a significant segment of users and are well taken.

  3. Just so I understand… These folks apply some pretty pyrotechnical algorithmic work, all behind the scenes, to approximate what Citysearch and Yelp have just by averaging their user ratings? I’m all for elegant algorithms, but not when there are equivalent, lower-complexity, transparent proxies for decision making. Also, it seems to me that they are focused on a really interesting, worthy, difficult problem to solve… that isn’t yet really a problem. We still live in a world where just finding the right plumber–one who is local, bonded, works on radiant-heat piping, and is available–is a challenge. In other words, consumers’ complex decisions still aren’t well served by the discovery models we’ve provided to them, which suggests that ratings-based comparison technologies (read: ranking) are helpful but also early and anticipatory.
