In August 2013, I blogged about an insurance company’s latest product feature, which enabled its customers to download all of their insurance verification documents to their cellphones through a software application. The company’s marketing team devised a commercial in which a pig driving a car is pulled over and hands his cellphone to the officer, presumably to show that he has proof of insurance (I didn’t make this up). At the time, I suggested there would be significant unintended consequences for people who turned their cellphones over to a police agency.

In a unanimous decision Wednesday, the Supreme Court of the United States ruled that police officers need a search warrant to search the cellphones of individuals they arrest. The decision would likely apply to tablets and laptop computers as well, and potentially to searches of homes and businesses and of information held by third parties, such as phone companies or cloud providers. Chief Justice John G. Roberts stated that cellphones are “such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.” More interesting still, in writing for the majority, Chief Justice Roberts acknowledged that cellphones are more than devices you merely speak into and listen to, a truly forward-thinking statement for such a traditional body of government.

When looking at this in the context of the insurance company’s phone app, there are still significant unintended consequences that people need to be made aware of. While the ruling prohibits warrantless searches of cellphones, relinquishing a cellphone to a police agency is still not advisable: officers can simply confiscate the device and then obtain a warrant to search it at a later date.

The Court of Justice of the European Union issued a ruling on May 13, 2014, holding that, under certain circumstances, search engine providers like Google are required to remove links to Web pages containing information about a person that is returned by a search on that person’s name, even when the information was published by a third party. The “Right to be Forgotten” lawsuit is a landmark case for EU member countries. It arose from a Spaniard’s complaint that Google’s search results disclosed details about an auction of his home, repossessed over unpaid debts, a matter that had been resolved many years earlier and was no longer relevant.

In the wake of the EU Court’s ruling, Google has posted a “Remove Information From Google” page for Europeans to request takedowns. Based on reports, the number of takedown requests averages about 10,000 per day and is growing. That number may seem high, but EU Justice Commissioner Viviane Reding notes that Google receives, and complies with, millions of copyright-related takedown requests. EU regulators plan to give search engine companies time to adjust to the ruling and to define exactly what compliance with the law should look like.

For application purposes in the United States, the ruling does raise questions about third-party use of public information. Query a person’s name in any search engine, and Internet ads offering birth and arrest records are sure to come up. Many people with similar names are then subjected to potentially unnecessary embarrassment and ridicule. However, the term “privacy” is applied entirely differently in the EU than in the U.S., which is why the “Right to be Forgotten” is unlikely to land upon the shores of America anytime soon.

Several news outlets reported yesterday that the Federal Trade Commission (“FTC”) is urging Congress to demand that data brokers tell consumers more about how they collect and use consumer information. Data brokers are companies that assemble digital profiles on nearly every U.S. consumer by gathering information from credit- and debit-card transactions, public records, online tracking cookies, and smartphones, among other sources.

The FTC, in its report, concluded that there is a “fundamental lack of transparency” in how data brokers go about collecting consumer information. FTC Chairwoman Edith Ramirez states that data brokers often “know as much – or even more – about us than our family and friends, including our online and in-store purchases, our political and religious affiliations, our income and socioeconomic status, and more.” The report, two years in the making, finds no actual harm to consumers and points only to potential misuses that have not been shown to occur. It also describes in depth how data brokers operate.

The report concludes that Congress should require the creation of an Internet website on which data brokers must disclose the sources of the data they collect about consumers and give consumers the opportunity to opt out. The prospect of anything becoming law in the near future seems unlikely at best; similar legislation introduced in February has gained little traction.

The Wall Street Journal reported yesterday that several antivirus software companies are reinventing their business models after decades of trying to prevent hackers from penetrating their customers’ IT infrastructure. According to Brian Dye, Senior Vice President of Information Security at Symantec Corp., antivirus “is dead” from a money-making perspective. Rather than trying to thwart hackers by keeping them out of a business’s IT network, antivirus software companies now assume hackers can get in (or are already there!), and, for a fee, will sell products and services that provide customers with intelligence briefings telling them two things: (1) their business is under attack; and (2) why their business is being attacked. However, what a customer really wants to know when its IT network is under attack is, how do we make it stop?

While the shift in business model may, in and of itself, create an issue of product integrity for the antivirus software industry (i.e., the product’s intended purpose cannot be met), there is a silver lining in the message being sent. On its face, it concedes that antivirus products and services alone will not prevent mission-critical business data from being released in an unauthorized manner. A holistic, comprehensive corporate data governance model is still the proper risk management step for organizations to employ. Antivirus detection services from IT security companies such as Symantec Corp., Juniper Networks, Inc., FireEye, Inc., and Shape Security, Inc. are important, but should not be relied upon alone.

Sales of cyber-insurance policies have spiked sharply this year, due mainly to the increased attention and scrutiny following the massive data breaches at Target and Neiman Marcus over the last holiday season. Also, in what was once an uncommon occurrence, banks are now suing retailers that have been victimized by hackers accessing mission-critical data. These threats against corporate data have finally caused many businesses to seek out risk management practices, such as insurance coverage, to protect against loss.

In general, most cyber-insurance policies cover the cost of a data breach investigation, customer notification, and credit-monitoring services, as well as legal expenses and damages resulting from consumer class action litigation. According to The Wall Street Journal today, general liability insurers are expected to adopt language specifically excluding damages arising out of cyber-attacks. The nuances of these policies have not yet been settled, and companies should have an attorney who understands this area of law examine the scope of coverage in each policy. That due diligence will help organizational leaders better determine whether the insurance premium benefits the company from a cost-savings standpoint.

In a report published today in The Wall Street Journal, the Obama White House was presented with the final four recommendations for restructuring the National Security Agency’s (“NSA”) controversial bulk collection of data. As one would imagine, none of the four options is perfect: (1) abolish the program entirely; (2) have the phone companies retain the data; (3) have a government agency other than the NSA (e.g., the FBI) hold the data; or (4) have an entity outside both the phone companies and the government hold the data.

Looking at the last three options objectively, the problem never really goes away. Private phone companies would become quasi-government agencies, and is it feasible to think, given its history under J. Edgar Hoover, that the FBI would be trustworthy enough not to hold onto the data? Judges are apprehensive about expanding the U.S. judicial system into such an oversight role, but the appointment of a “special master” to oversee how the data is collected may be just what the NSA program needs if it is to sustain itself long-term. Arguably, the primary function of our government is to protect and safeguard the citizens of the United States, and the first, or “nuclear,” option would deal a major blow to intelligence efforts at the national security level.

In response to the thousands of mobile applications hitting the market that rely on consumer data (e.g., contact information, location, photos), the Federal Trade Commission (“FTC”) released a suggested list of security guidelines for mobile app developers to follow when designing a program. While no one-size-fits-all checklist exists, the FTC views these security tips as a way to help protect developers, consumers, and the app’s reputation. The following are 12 suggested security guidelines for mobile application developers to consider:

1. Make someone responsible for security;
2. Take stock of the data you collect and retain;
3. Understand the differences between mobile platforms;
4. Don’t rely on a platform alone to protect your users;
5. Generate credentials securely;
6. Use transit encryption for usernames, passwords, and other important data;
7. Use due diligence on libraries and other third-party code;
8. Consider protecting data you store on a user’s device;
9. Protect your servers, too;
10. Don’t store passwords in plaintext;
11. You’re not done once you release your app. Stay aware and communicate with your users;
12. If you’re dealing with financial data, health data, or kids’ data, make sure you understand applicable standards and regulations.

Before getting into the core aspects of these guidelines, make sure to evaluate the ecosystem in which the app will reside. The FTC comments that while it is important to get the mobile app working and accepted by an app store, a critical third step, anticipating and preventing security glitches, is vital to the app’s long-term viability. A brief sketch illustrating two of the guidelines appears below.
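To make guidelines 6 and 10 more concrete, here is a minimal Python sketch, not drawn from the FTC’s materials, of how a developer might send credentials only over an encrypted HTTPS connection and store a salted hash rather than a plaintext password. The endpoint URL, field names, and iteration count are illustrative assumptions only.

    import hashlib
    import os

    def hash_password(password, salt=None):
        # Guideline 10: never store the plaintext password; derive a salted hash instead.
        if salt is None:
            salt = os.urandom(16)  # unique random salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
        return salt, digest

    def login_request(username, password):
        # Guideline 6: transmit credentials only over an encrypted (HTTPS) channel.
        # "https://api.example.com/login" is a hypothetical endpoint, not a real service.
        import json
        import urllib.request
        body = json.dumps({"user": username, "password": password}).encode("utf-8")
        req = urllib.request.Request(
            "https://api.example.com/login",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        # urllib verifies the server's TLS certificate by default on current Python versions.
        return urllib.request.urlopen(req)

    if __name__ == "__main__":
        salt, stored = hash_password("example-passphrase")
        print("salt:", salt.hex())
        print("hash:", stored.hex())

On the server side, the same salt and iteration count would be used to recompute the hash at login and compare it with the stored value, so the plaintext password never needs to be written to disk.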
