Google prepares to transform Search into a game for news junkies

In Google’s ongoing quest to become more social, the search giant has released a new feature called Google News badges that tracks what users read and allows them to share their badges with Google contacts to see what interests they have in common.

The badges bring a bit of social gaming into news reading…

“The more you read, the higher level badge you’ll receive, starting with Bronze, then moving up the ladder to Silver, Gold, Platinum and finally, Ultimate,” said Natasha Mohanty, an engineer working on Google News, in a company blog post. “We have more than 500 badges available, so no matter what kind of news you’re into, there’s a badge out there for you…”

“Your badges are private by default, but if you want, you can share your badges with your friends,” she said. “Tell them about your news interests, display your expertise, start a conversation or just plain brag about how well-read you are.”

While the badges show off what topics a person is interested in, they don’t offer information on what specific articles a user reads — that’s always left private, Mohanty said.

In an addition very much like Google+ Sparks, users will also be able to tailor news feeds in Google News to match their reading interests, and the badges can help users figure out just what it is they read most about, she said…

The badges are in their first iteration as of now, and more social features could be coming soon, she said. If a user reads a few articles on the same topic every day, it should take about a week to earn their first badge, Google said.
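For the technically curious, the mechanic is easy to sketch: count reads per topic and map the running total to a badge tier. Here is a minimal illustration in Python; the tier thresholds are invented, since Google hasn’t published the actual read counts required.

```python
# Illustrative badge-leveling logic. Tier thresholds are invented;
# Google has not published the real read counts required.
from collections import Counter

TIERS = [(10, "Bronze"), (50, "Silver"), (150, "Gold"),
         (400, "Platinum"), (1000, "Ultimate")]

reads_per_topic: Counter = Counter()

def record_read(topic: str) -> str | None:
    """Record one article read; return the highest badge earned, if any."""
    reads_per_topic[topic] += 1
    count = reads_per_topic[topic]
    earned = None
    for threshold, name in TIERS:
        if count >= threshold:
            earned = name
    return earned

# A couple of articles a day for a week crosses the first threshold.
for _ in range(14):
    badge = record_read("Technology")
print(badge)  # Bronze
```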

“Once we see how badges are used and shared, we look forward to taking this feature to the next level,” Mohanty said.

Whatever that might mean?

Readability of annual reports affects [in]accuracy of analysts

A new journal article in the May issue of Accounting Review shows that sell-side financial analysts expend greater effort to generate earnings forecasts for publicly traded firms with less readable 10-K filings. This increased effort results in analyst reports that give investors more information, but with less accurate forecasts and greater uncertainty.

Required annually by the SEC, 10-K reports provide a comprehensive overview of a company’s business and financial condition and include audited financial statements.

“Given the difficulty of following firms with less readable disclosures, analysts who choose to follow these firms likely exert greater effort to do so,” said Reuven Lehavy…. “On the one hand, lower readability of firm financial disclosures can increase the cost of processing the information in these disclosures and, therefore, can increase the demand for analyst services.

“On the other hand, less readable disclosure can increase the costs of analyst coverage. That is, analysts bear greater information-processing costs and higher private search costs for this information, which can lead to less accurate forecasts.”

Lehavy and Ross School colleagues Feng Li and Kenneth Merkley measured the readability of more than 33,700 10-K filings from 1995 to 2006. Using the Fog Index from the computational-linguistics literature, they gauged the written complexity of 10-K reports from average sentence length and the proportion of complex words (those of three or more syllables).
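For reference, the Fog Index is simple arithmetic: 0.4 times the sum of the average number of words per sentence and the percentage of complex words. Below is a rough sketch in Python; the vowel-group syllable counter is a crude heuristic added for illustration, not the authors’ exact method.

```python
# Rough sketch of the Gunning Fog Index as applied to filing text.
# The syllable counter is a crude heuristic, not the study's exact method.

def count_syllables(word: str) -> int:
    """Approximate syllables by counting groups of consecutive vowels."""
    word = word.lower().strip(".,;:!?\"'()")
    vowels = "aeiouy"
    count, prev_was_vowel = 0, False
    for ch in word:
        is_vowel = ch in vowels
        if is_vowel and not prev_was_vowel:
            count += 1
        prev_was_vowel = is_vowel
    return max(count, 1)

def fog_index(text: str) -> float:
    """Fog = 0.4 * (avg words per sentence + percent complex words),
    where a complex word has three or more syllables."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    words = text.split()
    complex_words = [w for w in words if count_syllables(w) >= 3]
    avg_sentence_length = len(words) / max(len(sentences), 1)
    percent_complex = 100 * len(complex_words) / max(len(words), 1)
    return 0.4 * (avg_sentence_length + percent_complex)

sample = ("The quarterly amortization of capitalized expenditures "
          "materially affected consolidated operating results.")
print(round(fog_index(sample), 1))  # higher scores mean harder reading
```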

Through statistical analysis, the researchers found that more financial analysts follow firms with less readable 10-K reports (suggesting greater demand for analyst services at these firms); that these analysts take, on average, two days longer to issue a first forecast revision following a 10-K filing (suggesting greater effort); and that their reports are associated with proportionally higher firm returns (suggesting investors find them more informative).

However, the study shows that analyst earnings forecasts for companies with less readable 10-K reports have greater dispersion, are less accurate and are associated with greater overall analyst uncertainty.

AFAIC, yet another reason to cast a dim eye on such reports. Poisonally, I’m happy enough with raw data and analyzing it myself – or following the work of someone who did the same. Futzing around with reports generated under the premise of confusing the government while satisfying the minimum intent of reporting regulations is a waste of time.

Google adds option to block sites from search results

Google dealt another blow to [sleazy] Web site owners Thursday when it gave users the option to block certain domains from search results.

Going forward, if you conduct a Google.com search, click on a link, but then return to Google, the search giant will include a “Block all xyz.com results” link underneath the site you just clicked. Once clicked, Google will display a message that says, “We will not show you results from xyz.com again” with the option to undo or manage blocked sites.

“Once you’ve blocked a domain, you won’t see it in your future search results,” Google said in a blog post, though “the next time you’re searching and a blocked page would have appeared, you’ll see a message telling you results have been blocked, making it easy to manage your personal list of blocked sites…”

Blocked sites will be tied to your Google Account, so you have to be signed in to confirm a block…
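The mechanics are straightforward to sketch: keep a per-account set of blocked domains and filter each results page against it, counting suppressions so the page can show its “results have been blocked” notice. The Python below is a hypothetical illustration; none of the names or structure are Google’s actual implementation.

```python
# Hypothetical sketch of per-account domain blocking, in the spirit of
# the feature described above; this is not Google's actual implementation.
from urllib.parse import urlparse

blocked_domains: set[str] = set()  # tied to one signed-in account

def block(domain: str) -> None:
    """Add a domain to this account's block list."""
    blocked_domains.add(domain.lower())

def filter_results(results: list[str]) -> tuple[list[str], int]:
    """Return visible results plus a count of suppressed ones, so the
    page can show a 'results have been blocked' notice."""
    visible, hidden = [], 0
    for url in results:
        host = urlparse(url).netloc.lower()
        if any(host == d or host.endswith("." + d) for d in blocked_domains):
            hidden += 1
        else:
            visible.append(url)
    return visible, hidden

block("xyz.com")
shown, hidden = filter_results([
    "https://example.org/article",
    "https://www.xyz.com/spam-page",
])
print(shown, hidden)  # ['https://example.org/article'] 1
```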

Google said Thursday that it is “not currently using the domains people block as a signal in ranking, [but] we’ll look at the data and see whether it would be useful as we continue to evaluate and improve our search results in the future.”

Bravo! Though I expect usage to be broadly defined, e.g., political, social, emotional – the opportunity to block spam sites warms the cockles of my heart. That includes otherwise potentially useful sites that work hard at thwarting pop-up blockers. 🙂

Just tried to use it, but it’s not working in Safari yet. Tried it in Firefox – and the result is up top.

Google alters search algorithm to avoid promoting spam sites

Google has announced a change to its search algorithm that reduces rankings for low-quality sites.

The change, implemented in the last few days, impacts about 11.8 percent of Google’s queries, Google’s Amit Singhal and Matt Cutts wrote in a blog post. The duo defined low-quality sites as those that are a “low-value add for users, copy content from other websites or sites that are just not very useful.”

“At the same time, it will provide better rankings for high-quality sites—sites with original content and information such as research, in-depth reports, thoughtful analysis and so on,” they wrote.

Singhal and Cutts did not provide too many details about what this algorithmic change entailed; search engine ranking mechanisms are often closely guarded secrets. But they said this week’s change did not rely on data received from the “Personal Blocklist” Chrome extension. That tool, introduced last week, lets Chrome users eliminate Google search results from dubious domains. Google did, however, compare the Blocklist data it has gathered with the sites identified by the algorithm, and found that user preferences are “well represented” in the new algorithm.

“If you take the top several dozen or so most-blocked domains from the Chrome extension, then this algorithmic change addresses 84 percent of them, which is strong independent confirmation of the user benefits,” Singhal and Cutts wrote.

Google acknowledged that any change to its algorithm will affect the rankings of sites. “It has to be that some sites will go up and some will go down,” they wrote. “It is important for high-quality sites to be rewarded, and that’s exactly what this change does.”

Time will tell – to use a trite phrase – but Google’s efforts to stem the flow of dross from the Web to our personal CPUs are a useful step. There is little in the history of international commerce – especially media-driven commerce – to suggest that there are more than a very few individuals and companies willing to put quality above quantity.

Especially when the results of those decisions are measured in coin of the realm.

FBI hunting for 3 men potentially linked to 9/11 attacks

The FBI is hunting for three mysterious Qatari men who might have ties to the September 11, 2001, terrorist attacks, a former law enforcement official with direct knowledge of the investigation said.

The men’s stories and identities — Meshad al Alhajri, Fahad Abdulla and Ali Alfehaid — were revealed in a recent WikiLeaks online document dump that included a February 8, 2010, U.S. diplomatic cable spelling out discussions from a U.S. meeting in Doha, Qatar. That meeting was led by Mirembe Nantongo, then and now the deputy chief of mission at the U.S. Embassy in Qatar. It was held more than eight years after the 2001 attacks for “watchlisting purposes,” according to the document, suggesting that those named should be put on terror lists…

According to the cable, the three flew on British Airways on August 15, 2001, from London to the United States. Over the next nine days, they visited New York (including the World Trade Center and Statue of Liberty) as well as Virginia and Washington, with one stop being the White House.

On August 24, 2001, al Alhajri, Abdulla and Alfehaid flew to Los Angeles, where they stayed in a motel. The three paid for the room in cash and told the motel staff they did not want housekeeping service, the ex-law enforcement official confirmed. When they checked out, they left behind pilots’ uniforms, paperwork containing pilots’ names and cardboard boxes that were addressed to Syria, Israel, Afghanistan and Jordan.

The three were booked on a September 10, 2001, flight from Los Angeles to Washington; their flight, as well as their hotel room, was paid for by “a convicted terrorist,” according to the WikiLeaks-released cable.

But the three men never showed up for the flight.

Had they shown up, the three would have flown on American Airlines Flight 144 — the same plane hijacked the next day and flown into the Pentagon in northern Virginia.

Uh-huh. Timely discovery. Only ten-plus years after the fact.

Japan’s newest well-qualified police dog


A “Peach” of a police dog
Daylife/AP Photo used by permission

Meet Japan’s newest police dog — all 3 kg of her.

In what is a first for Japan and perhaps the world, a long-haired Chihuahua named “Momo” — “Peach” — passed exams to become a police dog in the western Japanese prefecture of Nara.

The brown-and-white, perky Momo was one of 32 successful candidates out of 70 dogs, passing a search and rescue test by finding a person in five minutes after merely sniffing their cap.

“Any breed of dog can be entered to become a police dog in the search and rescue division,” said a Nara police spokesman…

Momo will be used for rescue operations in case of disasters such as earthquakes, in the hope that she may be able to squeeze her tiny frame into places too narrow for more usual rescue dogs, which tend to be German Shepherds.

Good for you – little Peach.

Frog the size of a pea discovered in Borneo

Microhyla nepenthicola, which was named after a plant on the island, is the smallest frog discovered in Asia, Africa or Europe.

Adult males of the new micro-species range in size from 10.6 to 12.8 millimetres, according to the taxonomic journal Zootaxa.

Indraneil Das of the Institute of Biodiversity and Environmental Conservation at the Universiti Malaysia Sarawak said the species had originally been misidentified in museum collections.

“Scientists presumably thought they were juveniles of other species, but it turns out they are adults of this newly-discovered micro species,” he said.

The tiny frogs were found on the edge of a road leading to the summit of the Gunung Serapi mountain in the Kubah National Park in the Malaysian state of Sarawak.

The scientists said they tracked the frogs by their call, a series of “harsh rasping notes” that started at sundown.

They then made the frogs jump onto a piece of white cloth to study them.

The find was part of a global search being undertaken by Conservation International and the International Union for Conservation of Nature’s Amphibian Specialist Group to “rediscover” 100 species of lost amphibians.

Phew! It’s about time we had some good news about amphibians.

Between climate change, disappearing habitat and – no doubt – some lounge lizard discovering they make a tidy snack, it feels like we’re losing little critters faster than ever.

Google discloses demands for censorship, user data

The country-by-country breakdown just released on Google’s Web site marks the first time that the Internet search leader has provided such a detailed look at the censorship and data requests that it gets from regulators, courts and other government agencies. The figures, for the roughly 100 countries in which it operates, cover the final half of last year and will be updated every six months.

Google posted the numbers nearly a month after it began redirecting search requests from its China-based service. Those requests are now handled in Hong Kong rather than mainland China so Google wouldn’t have to obey the country’s Internet censorship laws. Google said details about the censorship demands it got while in mainland China still aren’t being shared because the information is classified as a state secret.

In other countries, Google is making more extensive disclosures about censorship demands or other government requests to edit its search results. Google is also including demands to remove material from its other services, including the YouTube video site, although it is excluding removal requests related to allegations of copyright infringement, a recurring problem for YouTube.

Google is providing a more limited snapshot of government requests for its users’ personal information. The numbers are confined primarily to demands made as part of criminal cases, leaving out civil matters such as divorces. And Google isn’t revealing how often it cooperated with those data demands.

The disclosure comes as more regulators and consumer watchdogs around the world are complaining that the company doesn’t take people’s privacy seriously enough. Google maintains that its users’ privacy is one of the company’s highest priorities. The company also notes that, in one instance, it has gone to court to prevent the U.S. Justice Department from getting broad lists of people’s search requests.

I wonder if the number of requests from the DOJ has diminished?

Euro court rules arbitrary “stop and search” illegal


Photo by Marc Vallee

The ability of UK police to use “arbitrary” counter-terror stop and search powers against peace protesters and photographers lay in tatters today after a landmark ruling by the European court of human rights.

The Strasbourg court ruled it was unlawful for police to use the powers, under section 44 of the Terrorism Act 2000, to stop and search people without needing any grounds for suspicion.

The widely-drawn ruling said that not only the use of the counter-terror powers, but also the way they were authorised, were “neither sufficiently circumscribed, nor subject to adequate legal safeguards against abuse”…

A political furore ensued when it was disclosed that the whole of Greater London had been secretly designated for stop and search without suspicion since 2001…

The use of the powers is subject to confirmation by the home secretary within 48 hours, renewable after 28 days, but the European court said there was no real check on authorisations by parliament or the courts.

This was demonstrated by the continuous renewal, every 28 days, of the use of the powers in London since their first introduction, the judges added.

They said they were further concerned that the decision to stop and search somebody was “based exclusively on the ‘hunch’ or ‘professional intuition’ of the police officer”.

Another positive step forward against the regressive political powers handed over to governments in the wave of fear-mongering since 9/11.

Cowards always yearn for power that matches their favorite tyrants – and will fight to the last drop of our own blood to supersede civil rights and civil liberties. Both sides of the pond. Nice to see a bit of success at fightback.

Google will include real-time data in search results

Google has introduced so-called “real-time web” results into its search engine.

It means that Google will display information from news organisations, blogs and platforms, such as Twitter, as soon as it is published.

Google said it was “the first time” that a search engine had integrated the real-time web into its results page…

“Our users will get the results as they are produced,” said Google fellow Amit Singhal at an event in Mountain View in California…

The real-time data will be displayed in a constantly updating stream within the normal results page.

Har! This should really contribute to world-class paranoia over search engines in general and Google in particular.