ai - the most interesting in blogs


"Intellectual Debt": It's bad enough when AI gets its predictions wrong, but it's potentially WORSE when AI gets it right

Sunday, July 28, 2019, 17:48

Jonathan Zittrain (previously) is consistently a source of interesting insights that often arrive years ahead of their wider acceptance in tech, law, ethics and culture (2008's The Future of the Internet (and How to Stop It) is surprisingly relevant 11 years later). In a new long essay on Medium (with a shorter version in the New Yorker), Zittrain examines the perils of the "intellectual debt" we incur when we rely on machine learning systems that make predictions whose rationale we don't understand: without an underlying theory of those predictions, we can't know their limitations.

Zittrain cites Arthur C. Clarke's third law ("any sufficiently advanced technology is indistinguishable from magic") as the core problem here. It's like a pulp sf novel in which the descendants of a generation ship's crew have forgotten that they're on a spaceship and have no idea where the controls are: everything works great so long as the ship doesn't bump into anything its automated systems can't handle, but when that (inevitably) happens, everybody dies as the ship flies itself into a star or a black hole or a meteor.

In other words, while machine learning presents lots of problems when it gets things wrong (say, when algorithmic bias enshrines and automates racism or other forms of discrimination), at least we know enough to be wary of the system's predictions and to argue that they shouldn't be blindly followed. But if a system performs perfectly (and we don't know why), we come to rely on it, forget about it, and are blindsided when it finally goes wrong.



A generalized method for re-identifying people in "anonymized" data-sets

Wednesday, July 24, 2019, 18:23

"Anonymized data" is one of those holy grails, like "healthy ice-cream" or "selectively breakable crypto" -- if "anonymized data" is a thing, then companies can monetize their surveillance dossiers on us by selling them to all comers, without putting us at risk or putting themselves in legal jeopardy (to say nothing of the benefits to science and research of being able to do large-scale data analyses and then publish them along with the underlying data for peer review without posing a risk to the people in the data-set, AKA "release and forget").

As the old saying goes: "wanting it badly is not enough." Worse still, legislatures around the world are convinced that because anonymized data would be amazing and profitable and useful, it must therefore be possible, and they've made laws that say, "once you've anonymized this data, you can treat it like it is totally harmless," without ever saying what "anonymization" actually entails.

Enter a research team from Imperial College London and Belgium's Université catholique de Louvain, whose Nature article "Estimating the success of re-identifications in incomplete datasets using generative models" shows that they can re-identify "99.98 percent of Americans from almost any available data set with as few as 15 attributes." That means that virtually every large-scale, anonymized data-set for sale or circulating for scientific research purposes today is not anonymized at all, and should not be circulating or sold. (Rob discussed this earlier today)
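The intuition behind that 15-attribute figure can be seen with a naive back-of-the-envelope calculation. This is a deliberate simplification: the paper itself fits a generative (copula-based) model rather than assuming independent attributes, and the attribute names and cardinalities below are made up purely for illustration.

```python
# Back-of-envelope sketch (NOT the paper's method): under a naive
# independence assumption, each known attribute divides down the expected
# number of people who share your full attribute combination.

population = 330_000_000  # rough US population

# hypothetical attributes and their number of distinct values
attributes = {
    "zip_code": 40_000,
    "birth_date": 365 * 80,   # day + plausible year of birth
    "gender": 2,
    "vehicle_make": 60,
    "marital_status": 5,
}

expected_matches = population
for name, cardinality in attributes.items():
    expected_matches /= cardinality

print(f"expected people sharing all {len(attributes)} attributes: "
      f"{expected_matches:.6f}")
```

With just these five toy attributes the expected count is already far below one person, i.e. a matching record is almost surely unique; real attributes are correlated, which is why the paper needs a statistical model to estimate uniqueness properly rather than this multiplication.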

The researchers chose to publish their method rather than keep it a secret so that people who maintain these data-sets can use it to test whether their anonymization methods actually work (Narrator: They don't).



Interactive map of public facial recognition systems in America

Thursday, July 18, 2019, 19:51

Evan Greer from Fight for the Future writes, "Facial recognition might be the most invasive and dangerous form of surveillance tech ever invented. While it's been in the headlines lately, most of us still don't know whether it's happening in our area. My organization Fight for the Future has compiled an interactive map that shows everywhere in the US (that we know of) where facial recognition is being used, but also where there are local efforts to ban it, as has already happened in San Francisco, Oakland, and Somerville, MA. We've also got a tool kit for local residents who want to get an ordinance or state legislation passed in their area."




Many of the key Googler Uprising organizers have quit, citing retaliation from senior management

Tuesday, July 17, 2019, 03:01

The Googler Uprising was a string of employee actions within Google over a series of issues related to ethics and business practices: first the company's AI project for US military drones, then its secretive work on a censored, surveilling search tool for use in China, then the $80m payout to Android founder Andy Rubin after he was accused of multiple sexual assaults.

Tens of thousands of Google employees participated in the uprising, including 20,000 who walked off the job in the November 2018 walkout. The activist Google employees moved from victory to victory, including the ouster of a transphobic, racist, xenophobic ideologue who had been appointed to Google's "AI Ethics" board.

Two key organizers, Meredith Whittaker and Claire Stapleton, publicly accused the company of targeting them for retaliation in April (to enormous internal uproar).

Now, Whittaker has resigned (on the thirteenth anniversary of her employment with Google), along with Celie O’Neil-Hart, who had been global head of trust and transparency marketing at YouTube Ads, and Google News Labs' Erica Anderson.

In Whittaker's farewell note to her colleagues, she calls on them to "unionize — in a way that works," "protect conscientious objectors and whistleblowers," "demand to know what you’re working on, and how it’s used" and "build solidarity with those beyond the company." She says that Google's entry into "new markets" like "healthcare, fossil fuels, city development and governance, transportation, and beyond...is gaining significant and largely unchecked power to impact our world (including in profoundly dangerous ways, such as accelerating the extraction of fossil fuels and the deployment of surveillance technology)."

Whittaker will devote her work to AI Now, the group she co-founded to build and promulgate critical, ethical frameworks for AI research.



China's AI industry is tanking

Thursday, July 11, 2019, 21:21

In Q2 2018, Chinese investors sank $2.87b into AI startups; in Q2 2019, it was $140.7m.

It's part of a massive slowdown in China's AI industry, a sector the Chinese state launched with massive political and economic fanfare, promising it would be worth $150b by 2030, a boast that touched off anxiety about a global AI arms race.

Two years later, valuations for the companies that bet biggest on AI are plummeting. Baidu has sunk to a valuation just 10% of that of rivals Alibaba and Tencent, though all three were level as recently as 2017; the company's top AI scientists have quit, and it has just booked its first loss since 2005.

Many of the early, promising AI demos have fizzled or turned out to be smoke-and-mirrors: the much-vaunted demonstrations by health-tech companies like Ping An of diagnosing diseases early and heading them off before they could spread turn out to represent mere incremental improvements over techniques that were documented and demonstrated in the 1970s.

The other problem is that China simply can't produce the semiconductors that it needs for AI research; as a nation, China now spends $300b/year importing microchips (largely from South Korea) -- more than it spends on imported oil.

McKinsey, noting China’s modest progress in the field, points to the exponential growth in money and effort required as chips advance: it takes about 500 steps to create a 20nm chip, but 1,500 steps for a smaller 7nm chip.

Meantime, the wealth of fab projects has triggered a scramble for talent and exposed a talent shortfall of more than 400,000 employees.




AI is like a magic trick: amazing until it goes wrong, then revealed as a cheap and brittle effect

Tuesday, July 9, 2019, 20:28

I used to be on the program committee for the O'Reilly Emerging Technology conferences; one year we decided to make the theme "magic" -- all the ways that new technologies were doing things that baffled us and blew us away.

One thing I remember from that conference is that the technology was like magic: incredible when it worked, mundane once it was explained, and very, very limited in terms of what circumstances it would work under.

Writing in Forbes, Kalev Leetaru compares today's machine learning systems to magic tricks, and boy is the comparison apt: "Under perfect circumstances and fed ideal input data that closely matches its original training data, the resulting solutions are nothing short of magic, allowing their users to suspend disbelief and imagine for a moment that an intelligent silicon being is behind their results. Yet the slightest change of even a single pixel can throw it all into chaos, resulting in absolute gibberish or even life-threatening outcomes."
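The single-pixel fragility Leetaru describes is easiest to see in a toy model. The sketch below is an illustration of decision-boundary brittleness, not a real adversarial attack on a deep network; the classifier, weights, and labels are all invented for the example.

```python
import numpy as np

# Toy linear classifier over an 8x8 "image" (64 pixels). One pixel is
# given a large weight, standing in for a feature the model leans on
# heavily; nudging just that pixel flips the prediction.

rng = np.random.default_rng(0)
w = rng.normal(size=64)   # learned weights (invented here)
w[10] = 5.0               # the high-leverage pixel

x = rng.normal(size=64)   # an arbitrary input image

def predict(img):
    return "cat" if w @ img > 0 else "dog"

before = predict(x)

# Perturb only pixel 10, just enough to push the score past zero
# in the opposite direction.
x_adv = x.copy()
x_adv[10] -= np.sign(w @ x) * (abs(w @ x) / abs(w[10]) + 0.1)

after = predict(x_adv)
print(before, "->", after)   # the label flips from a one-pixel change
```

Real adversarial-example attacks on deep networks (e.g. gradient-based methods) are more involved, but the underlying point is the same: a model with no theory of what a cat *is* can be steered across its decision boundary by changes no human would notice.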

And just like magicians, the companies and agencies that use machine learning systems won't let you look behind the scenes or examine the props: Facebook won't reveal its false positive rates or allow external auditors for its machine learning system, which is why Instagram's anti-bullying AI is going to be a fucking catastrophe.

Another important parallel: magic tricks depend on the deliberate cultivation of a misapprehension of what's going on. A magician convinces you that they're doing the same trick three times in a row, while really it's three different tricks, so the hypothesis you develop the first time is invalidated when you see the trick "again" a second time.


