A 14-year-old boy may have forever changed the way the auto industry views cyber security.
He was part of a group of high-school and college students that joined professional engineers, policy-makers and white-hat security experts for a five-day camp last July that addressed car-hacking threats…
With some help from the assembled experts, he was supposed to attempt a remote infiltration of a car, a process that some of the nation’s top security experts say can take weeks or months of intricate planning. The student, though, eschewed any guidance. One night, he went to Radio Shack, spent $14 on parts and stayed up late into the night building his own circuit board.
The next morning, he used his homemade device to hack into the car of a major automaker. Camp leaders and automaker representatives were dumbfounded. “They said, ‘There’s no way he should be able to do that,'” Brown said Tuesday, recounting the previously undisclosed incident at a seminar on the industry’s readiness to handle cyber threats. “It was mind-blowing.”
Windshield wipers turned on and off. Doors locked and unlocked. The remote start feature engaged. The student even got the car’s lights to flash on and off, set to the beat from songs on his iPhone. Though they wouldn’t divulge the student’s name or the brand of the affected car, representatives from both Delphi and Battelle, the nonprofit that ran the CyberAuto Challenge event, confirmed the details…
“It was a pivot moment,” said Dr. Anuja Sonalker, lead scientist and program manager at Battelle. “For the automakers participating, they realized, ‘Huh, the barrier to entry was far lower than we thought.’ You don’t have to be an engineer. You can be a kid with $14.”
She described the breach as more of a nuisance attack, and emphasized that, in this case, no critical safety functions, like steering, braking or acceleration, were compromised. But the incident underscored just how vulnerable cars have become.
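Attacks like the one described generally work by injecting forged frames onto the car's CAN bus, the internal network that carries commands for body functions like wipers, locks and lights. As a rough illustration only, here is how a raw frame is packed in the Linux SocketCAN layout — the message IDs and payloads below are invented, since real values differ per automaker and are exactly what an attacker has to sniff off the bus first:

```python
import struct

# Linux SocketCAN frame layout: 32-bit CAN ID, 1-byte data length code,
# 3 padding bytes, then up to 8 data bytes.
CAN_FRAME_FMT = "<IB3x8s"

def build_can_frame(can_id, data):
    """Pack an arbitration ID and payload into a raw 16-byte SocketCAN frame."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

# Hypothetical body-control messages -- illustrative IDs only.
WIPER_ON  = build_can_frame(0x2F1, bytes([0x01]))
DOOR_LOCK = build_can_frame(0x3B0, bytes([0xFF, 0x01]))
```

The point of the $14 anecdote is that nothing in this layer authenticates the sender: any node that can write to the bus can issue body-control commands.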
None of this is just geek news. Nor is there any surprise in this display of auto industry leaders' ignorance of their technology's vulnerability, or of the sophisticated toolkits of hardware and software available to even kid-level hackers.
European manufacturers experienced something similar a few years back and revised their engineering designs to match reality, some more successfully than others. Why American corporate leaders didn't pay attention and learn speaks to how parochial and insular most Americans are. Another part of that corporate [and political] personality is native to imperial populations: if you have the most power, you think you must also know best how to do anything.
In fact, reality, especially when much of your culture is well past its peak, contradicts that belief.
Adding a PIN is so difficult, eh?
New technology about to be deployed by credit card companies will require U.S. consumers to carry a new kind of card and retailers across the nation to upgrade payment terminals. But despite a price tag of $8.65 billion, the shift will address only a narrow range of security issues.
Credit card companies have set an October deadline for the switch to chip-enabled cards, which come with embedded computer chips that make them far more difficult to clone. Counterfeit cards, however, account for only about 37 percent of credit card fraud, and the new technology will be nearly as vulnerable to other kinds of hacking and cyber attacks as current swipe-card systems, security experts say.
Moreover, U.S. banks and card companies will not issue personal identification numbers (PINs) with the new credit cards, an additional security measure that would render stolen or lost cards virtually useless when making in-person purchases at a retail outlet. Instead, they will stick with the present system of requiring signatures…
Chip technology has been widely used in Europe for nearly two decades, but banks there typically require PINs. Even so, the technology leaves data unprotected at three key points, security experts say: When it enters a payment terminal, when it is transmitted through a processor, and when it is stored in a retailer’s information systems. It also does not protect online transactions.
American corporations inside the retail purchasing loop are perfectly willing to expand that to four key points.
Retailers and security experts say it would make more sense for the United States to jump instead to a more secure system, such as point-to-point encryption. This technology is superior to chip-and-PIN, which first was deployed about 20 years ago, because it scrambles data to make it unreadable from the moment a transaction starts.
But the newer technology would cost as much as twice what the chip card transition will cost…
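What makes point-to-point encryption stronger is that the card number is scrambled inside the terminal itself and stays unreadable at every hop until it reaches the processor. Real deployments use AES with DUKPT key management; the sketch below is a deliberately simplified stdlib-only toy (an HMAC-derived keystream standing in for a real cipher) just to show the shape of the scheme — nothing here should be used for actual payments:

```python
import hashlib
import hmac
import os

def _keystream(key, nonce, length):
    """Derive a pseudorandom keystream from the processor key (toy cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_at_terminal(pan, key):
    """Encrypt the card number the moment it is read. The merchant's
    systems only ever handle ciphertext."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in
               zip(pan.encode(), _keystream(key, nonce, len(pan))))
    return nonce, ct

def decrypt_at_processor(nonce, ct, key):
    """Only the payment processor, which holds the key, can reverse this."""
    return bytes(a ^ b for a, b in
                 zip(ct, _keystream(key, nonce, len(ct)))).decode()
```

Contrast this with chip-and-PIN, where the card number still travels in a readable form through the terminal, the processor link and the retailer's systems — the three exposure points the security experts list above.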
Moreover, some security experts say that mobile payment services such as Apple Pay, a service from Apple that stores data on the cloud, have the potential in coming years to secure payments without the need to swipe or tap a card at all…
Rick Dakin, who is advising a group of banks on payment security, said no industry standard exists for the newer point-to-point encryption systems, and banks and card companies are hesitant to make large-scale investments before the standards are set.
Apparently, 20 years isn’t sufficient time to adopt standards in the United States.
Banks and card companies said a chip card alone can make stolen data less useful for hackers and the technology has worked in reducing counterfeit card fraud in Europe and elsewhere.
Security experts said the shift cannot prevent massive consumer data breaches of the sort that recently hit Target and Home Depot. But the technology will make it more difficult to use stolen data.
The installation of 15 million payment terminals that can read chip cards in the U.S. will cost approximately $6.75 billion. Banks are expected to spend some $1.4 billion to issue new cards and another $500 million to upgrade their automated teller machines, according to Javelin Strategy & Research.
Beancounters live and die on hindsight – and this is another case of crap decisions being worthless.
What would this conversion have cost in 1995 dollar$? How many billion$ have been lost to fraud, counterfeit credit cards and identity theft? All it took in the first place was a willingness to make security a priority.
Following comments regarding Apple Watch specifications and an upcoming Apple Store revamp, Cook spoke with the Telegraph in an extensive interview covering data privacy, government snooping, terrorism and more.
The Apple chief is cognizant of the amount of customer information being “trafficked around” by corporations, governments and other organizations, saying data sharing is a practice that goes against Apple’s core philosophies. He said consumers, however, “don’t fully understand what is going on” at present, but “one day they will, and will be very offended.”
“None of us should accept that the government or a company or anybody should have access to all of our private information,” Cook said. “This is a basic human right. We all have a right to privacy. We shouldn’t give it up. We shouldn’t give in to scare-mongering or to people who fundamentally don’t understand the details…”
The publication also asked about implications of terrorism, especially government surveillance operations created with the intent of aiding law enforcement agencies. Cook took a hard-nosed stance on the topic, saying the issue is a non-starter in his book because terrorists use proprietary encryption tools not under the control of U.S. or UK governments.
“Terrorists will encrypt. They know what to do,” Cook said. “If we don’t encrypt, the people we affect [by cracking down on privacy] are the good people. They are the 99.999 percent of people who are good.” He added, “You don’t want to eliminate everyone’s privacy. If you do, you not only don’t solve the terrorist issue but you also take away something that is a human right. The consequences of doing that are very significant…”
The executive reiterated Apple’s mantra of making products, not marketing consumers as products. Every device and service that comes out of Cupertino is designed to store only a minimal amount of customer information, Cook said.
Finally, Cook talked about privacy as it applies to Apple Pay, the fledgling payments service Apple rolled out in October. Unlike other payments processors, Apple designed Apple Pay to reveal little to no information to outside parties, including itself.
“If you use your phone to buy something on Apple Pay, we don’t want to know what you bought, how much you paid for it and where you bought it. That is between you, your bank and the merchant,” Cook said. “Could we make money from knowing about this? Of course. Do you want us to do that? No. Would it be in our value system to do that? No. We’ve designed [Apple Pay] to be private and for it to be secure.”
I love the privacy of Apple Pay. I haven’t stopped smiling since the first time a checkout clerk exclaimed…”It doesn’t even tell me your name!”
This is excerpted from a long interview in the TELEGRAPH – worth reading.
Arizona mother Cathy Seymour’s 16-year-old son was arrested in August 2013 for allegedly shooting a detention officer to death; he was charged with first-degree murder as an adult and held in jail.
Now she uses her laptop and a video link to spring him from maximum security detention in the 4th Avenue Jail in downtown Phoenix, take him on a virtual tour of some of his favorite places and visit with family and friends.
“If there’s Wi-Fi and you have a laptop, you don’t have to stay in your home,” she says of the recently installed pay-per-view system that links a video terminal in the jail to her laptop at a cost of $5 for 20 minutes.
“His favorite spot is McDonald’s, so we went to McDonald’s … I’ll show him, like, the street … He gets to see other people … He gets to see my mom and dad and church,” said Seymour, who spoke to Al Jazeera America on the condition that her son not be named.
She is among thousands of family members nationwide using pay-per-view video chats to connect with loved ones who are incarcerated. The technology is gaining traction in jail systems across the U.S. in a push by the for-profit prison industry to monetize inmate contact.
At the end of 2014, 388 U.S. jails — about 1 in 8 — offered pay-per-view video visits, and the service was also available in 123 prisons, according to a study by the nonprofit Prison Policy Initiative (PPI).
Since the report was published in January, the PPI has become aware of at least 25 additional jails that have implemented the technology. Once video visitation systems are in place, most jails eliminate in-person family visits, securing a captive market for private firms. Seven companies dominate the market, and for 20 minutes, they charge from $5 in Maricopa County, Arizona, to $29.95 in Racine County, Wisconsin…
For Seymour, the pay-per-view video visits help her maintain a relationship with her teenage son, with whom she shares as many as four video chats a day. “He’s in an ugly place now … I don’t agree with the sheriff on much, but there is benefit to it,” she said of the system…
The boom in for-profit video visitation is also transforming the way lawyers work with their clients. Some criminal defense attorneys, like Marci Kratter in Phoenix, find much to like.
Before the system went live in November, Kratter had to drive to a jail, park, sign in and go to a visitation area to wait for her client in what she described as an “at least a two-hour ordeal.” Now with video visitation, “it’s 20 minutes. You do it from your desk … As far as rapport building goes and trust, when you can check in with [your clients] every week, they know you’re thinking about them.”
RTFA. Many variations on the theme – as you would expect. A predictable number of jailers are more interested in vacuuming every last greenback from the wallets of relatives, friends, lawyers. Some are more interested in security. You ain’t smuggling in smack or a cell phone over an internet connection.
There is a lawsuit started by defense attorneys in Travis County against Securus, the sheriff’s office and other county officials. It charges that video visits were used to illegally record attorneys’ confidential calls with their clients…and that the information gained was used against clients and other prisoners. I’d be shocked – shocked, I tell you – if something like that actually happened.
Y’all know how deeply we trust law enforcement in America. Right?
You can’t make this stuff up.
MCX, the retailer consortium behind Apple Pay competitor CurrentC, has already been hacked, according to an email sent out to those people who have signed up for, or downloaded, the CurrentC app…
A spokeswoman confirmed that the email is real.
MCX, which is a consortium of dozens of retailers including Walmart, Best Buy, Target, Kohl’s and CVS, says that no other information has been taken but that the investigation is continuing. The “unauthorized third parties” were able to access email addresses of people who were part of the app’s private beta testing program as well as email addresses of people who simply signed up to access the app when it launches publicly…
MCX confirmed this morning that its member companies have promised to only support CurrentC. MCX was formed in large part to create a mobile app that would persuade shoppers to pay through their phone with their checking account or store-branded plastic. The retailers’ goal here was to cut down on the transaction fees they have to pay banks and credit card networks on traditional credit card purchases. That is likely a big reason why they oppose Apple Pay, which supports those traditional cards.
But the hack now raises big questions about whether shoppers will trust the CurrentC app with their sensitive financial information when it launches; the app asks for users’ social security number and driver’s license information if they want to link their bank account with it. The app does not currently let users pay with their traditional credit card accounts, though an MCX blog post published this morning said it would eventually support credit cards, without providing details on which kinds. Until CurrentC launches, customers shopping at MCX stores will be left with the choice of using cash or traditional magstripe cards, which have proved easy to clone.
By banning Apple Pay, which is built into the new line of iPhones, merchants are choosing to ban a more secure payment method. Apple Pay customers can use a wide range of credit and debit card accounts to make purchases. Users have to authorize a transaction by pressing their finger against the phone’s fingerprint sensor. The phone then sends payment information to a store’s checkout equipment, though it comes in the form of a stand-in string of characters known as a token and does not include an actual credit or debit card number.
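The tokenization idea can be sketched in a few lines. The real Apple Pay scheme uses EMV network tokenization with per-transaction cryptograms, so the toy "vault" below is a simplification — but it shows the core property: the merchant never sees anything that could be replayed as a card number.

```python
import secrets

class TokenVault:
    """Toy token vault: maps random, meaningless tokens to real card
    numbers. Only the card network holds the mapping; a breach of the
    merchant exposes tokens, not cards."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, pan):
        """Issue a random stand-in string for a real card number."""
        token = secrets.token_hex(8)
        self._vault[token] = pan
        return token

    def detokenize(self, token):
        """Recover the real card number -- only possible inside the vault."""
        return self._vault[token]
```

Since the token is random rather than derived from the card number, stealing it from a store's checkout system yields nothing usable elsewhere.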
Our household has already switched over to Apple Pay. More than anything, we love the anonymity and security. No one gets to see our credit card number. Not even our name.
In a document (PDF link) meant to guide law enforcement officers in requesting user information, Apple notes that it no longer stores encryption keys for devices with iOS 8, meaning agencies are unable to gain access even with a valid search warrant. This includes data stored on a physical device protected by a passcode, including photos, call history, contacts and more.
“Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data,” Apple said on its new webpage dedicated to privacy policies. “So it’s not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8.”
The safeguards do not apply to other services including iCloud, however, meaning any data stored offsite is fair game for government seizure. Still, the security implementation will likely be seen as a step in the right direction, especially given the current political climate following revelations of governmental “snooping” activities.
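The reason Apple "cannot bypass your passcode" is architectural: the data-protection key is derived from the user's passcode entangled with a secret unique to the device, so no one — Apple included — can recompute it without the passcode. A hedged sketch of that derivation, using PBKDF2 from the standard library (the actual iOS key hierarchy is more elaborate, and the 100,000-round count here is illustrative):

```python
import hashlib

def derive_key(passcode, device_uid):
    """Derive a data-protection key from the passcode plus a per-device
    secret. Without the passcode the key cannot be recomputed, which is
    why extraction warrants can't be serviced for the device itself."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(),
                               device_uid, 100_000)
```

Note this protects only data on the device — anything synced to iCloud is encrypted under keys Apple does hold, which is exactly the caveat above.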
Overdue. As Edward Snowden suggested, encryption is still one of the best ways to frustrate government snooping. A standard that other tech companies might emulate even if it gets in the way of their monetization of your data.
The federal government is inching closer to mandating cars have the ability to communicate with each other, in a move regulators say could reduce crashes while still protecting motorists’ personal information.
Called vehicle-to-vehicle communication (V2V), the technology would use radio frequencies to communicate potential dangers to drivers, and the Transportation Department has begun the rule-making process of possibly making it required equipment in cars, though it could take years for a new law to take effect…
“By warning drivers of imminent danger, V2V technology has the potential to dramatically improve highway safety,” NHTSA Deputy Administrator David Friedman said in a statement.
NHTSA also said vehicle communication could be used to assist in blind-spot detection, forward-collision alarms and warnings not to pass, though many of these technologies are available in today’s cars using other technologies, like radar.
Mindful of recent “hacking” incidents involving major retailers, websites and identity theft, NHTSA said the data transmitted would only be used for safety purposes, and notes the systems being considered would contain “several layers” of security and privacy protection.
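In practice, V2V works by having every car broadcast a short "basic safety message" — position, speed, heading — roughly ten times a second, and each receiver decides locally whether a warning is needed. A one-dimensional sketch of that decision (the message fields and two-second threshold below are simplifications, not the DSRC standard):

```python
from dataclasses import dataclass

@dataclass
class BasicSafetyMessage:
    # Simplified, one-dimensional stand-in for the V2V safety broadcast:
    # position along the road (meters) and speed (meters/second).
    position_m: float
    speed_mps: float

def forward_collision_warning(me, ahead, threshold_s=2.0):
    """Warn when the gap to the car ahead would close in under
    threshold_s seconds at current speeds."""
    gap = ahead.position_m - me.position_m
    closing_speed = me.speed_mps - ahead.speed_mps
    if closing_speed <= 0:
        return False  # not closing on the car ahead
    return gap / closing_speed < threshold_s
```

Unlike radar, this works on cars the sensors can't see — around a blind corner or several vehicles ahead — which is the safety case NHTSA is making.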
On one hand, I’ve been following this development from car manufacturers who wish to use tech like this for accident prevention. Mercedes is a leader on this side of the research.
On the other, is there anyone left in America who trusts the government enough to buy into this technology? Even if security from hackers could be guaranteed, does anyone think the Feds would pass up backdoor access to keep an eye on us?
Square announced it was developing a new credit card reader that would allow businesses to begin accepting a more secure type of credit card being rolled out in the U.S. over the next 15 months.
The announcement comes as credit cards embedded with microchips finally begin to reach American consumers. The cards, which have been common for a decade in many other parts of the world, are believed to be harder to clone than traditional stripe cards.
Hustlers in Europe will agree.
Beginning in October 2015, liability for credit card fraud will sit with whichever entity — the issuer or the merchant — is using the less secure equipment. So a merchant would be penalized if it doesn’t have the equipment to accept chip cards and suffers an unauthorized purchase with a card that had a chip in it. On the other hand, the bank would be liable if it doesn’t issue chip cards and one of its customers makes an unauthorized transaction with a traditional card at a store that accepts chip cards…
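The liability rule described above is simple enough to state as code. This sketch ignores the edge cases (ATMs and fuel pumps got later deadlines, and when both sides support chip, standard pre-shift rules apply, which generally left counterfeit-fraud losses with the issuer):

```python
def fraud_liability(issuer_issues_chip, merchant_accepts_chip):
    """Post-October-2015 liability shift, simplified: the party using
    the less secure equipment eats the counterfeit-fraud loss. When
    both (or neither) support chip, liability stays with the issuer,
    roughly as it did before the shift."""
    if issuer_issues_chip and not merchant_accepts_chip:
        return "merchant"
    return "issuer"
```

The design intent is an incentive structure: each side can escape liability only by upgrading its own equipment, so neither can free-ride on the other's investment.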
Square makes the point this will enable expansion into other markets.
I’m not certain how that statement fits into Square’s growth plans. Are they taking advantage of opportunities opening up because they have to make this change anyway – or is this around the time when they planned on moving into Europe?
Either way, I admit to liking the usability and design of their hardware/software packages.
It seems as though every week or so there’s a new hack or exploit that reveals millions of passwords or important data from a popular web service, and this week is no exception. On Tuesday, IT professionals got word of a serious flaw in OpenSSL — the encryption software used by an estimated two-thirds of the servers on the internet. The flaw, which was dubbed “Heartbleed,” may have exposed the personal data of millions of users and the encryption keys to some of the web’s largest services. Here’s what you need to know:
It’s a bug in some versions of the OpenSSL software that handles security for a lot of large websites. In a nutshell, a weakness in one feature of the software — the so-called “heartbeat” extension, which allows services to keep a secure connection open over an extended period of time — allows hackers to read and capture data that is stored in the memory of the system. It was discovered independently by a security company called Codenomicon and a Google researcher named Neel Mehta, both of whom have helped co-ordinate the response…
As Tim Lee at Vox points out in his overview, the lock that you see in your browser’s address bar when you visit a website “is supposed to signal that third parties won’t be able to read any information you send or receive. Under the hood, SSL accomplishes that by transforming your data into a coded message that only the recipient knows how to decipher.” But researchers found it was possible to “send a cleverly formed, malicious heartbeat message that tricks the computer at the other end into divulging secret information…”
If you are a web user, the short answer is not much. You can check the list of sites affected on Github, or you could try a tool from developer Filippo Valsorda that checks sites to see if they are still vulnerable (although false positives have been reported), and you should probably change your passwords for those sites if you find any you use regularly.
RTFA if you want all the gory details. The bug is two years old though only just discovered, so no one has a clue how long evildoers may have been screwing around with folks’ accounts at sites containing the bug.
I’d suggest reading the list at Github and staying away from sites on the list – until they disappear from it. Changing passwords at affected sites is a good idea as well, though I can imagine problems if a site is still vulnerable while you’re doing exactly that. Once sites are certified clean, change your passwords and do a thorough job of it.
UPDATE: NSA scumbags knew about the bug for two years and used it to break into encrypted communications – rather than notify American companies and consumers so they might protect themselves…http://tinyurl.com/mq8owa2