Saturday, 31 March 2018

Facebook plans crackdown on ad targeting by email without consent

Facebook is scrambling to add safeguards against abuse of user data as it reels from backlash over the Cambridge Analytica scandal. Now TechCrunch has learned Facebook will launch a certification tool that demands that marketers guarantee email addresses used for ad targeting were rightfully attained. This new Custom Audiences certification tool was described by Facebook representatives to their marketing clients, according to two sources. Facebook will also prevent the sharing of Custom Audience data across Business accounts.

This snippet of a message sent by a Facebook rep to a client notes that “for any Custom Audiences data imported into Facebook, Advertisers will be required to represent and warrant that proper user content has been obtained.”

Once shown the message, Facebook spokesperson Elisabeth Diana told TechCrunch “I can confirm there is a permissions tool that we’re building.” It will require that advertisers and the agencies representing them pledge that “I certify that I have permission to use this data”, she said.

Diana noted that “We’ve always had terms in place to ensure that advertisers have consent for data they use but we’re going to make that much more prominent and educate advertisers on the way they can use the data.” The change isn’t in response to a specific incident, but Facebook does plan to re-review the way it works with third-party data measurement firms to ensure everything is responsibly used. “This is a way to safeguard data,” Diana concluded. The company declined to specify whether it’s ever blocked usage of a Custom Audience because it suspected the owner didn’t have user consent.

The social network is hoping to prevent further misuse of ill-gotten data after Dr. Aleksandr Kogan’s app that pulled data on 50 million Facebook users was passed to Cambridge Analytica in violation of Facebook policy. That sordid data is suspected to have been used by Cambridge Analytica to support the Trump and Brexit campaigns, which employed Custom Audiences to reach voters.

Facebook launched Custom Audiences back in 2012 to let businesses upload hashed lists of their customers’ email addresses or phone numbers, allowing advertisers to target specific people instead of broad demographics. Custom Audiences quickly became one of Facebook’s most powerful advertising options because businesses could easily reach existing customers to drive repeat sales. The Custom Audiences terms of service require that businesses have “provided appropriate notice to and secured any necessary consent from the data subjects” to attain and use these people’s contact info.
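Hashing before upload means a list can be matched against accounts without transmitting raw contact details. A minimal sketch of the normalize-then-hash step (the function name and exact normalization rules here are illustrative, though trimming, lowercasing, and SHA-256 match common practice for this kind of upload):

```python
import hashlib

def hash_identifier(value: str) -> str:
    """Normalize a contact identifier, then return its SHA-256 hex digest."""
    normalized = value.strip().lower()  # trim whitespace, lowercase
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same address always hashes identically, so an uploaded list can be
# matched server-side without either party exchanging raw emails.
print(hash_identifier("  Jane.Doe@example.com "))
```

Note that hashing only protects the identifiers in transit; it says nothing about whether the list was collected with consent, which is exactly the gap the certification tool is meant to address.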

But just like Facebook’s policy told app developers like Kogan not to sell, share, or misuse data they collected from Facebook users, the company didn’t go further to enforce this rule. It essentially trusted that the fear of legal repercussions or suspension on Facebook would deter violations of both its app data privacy and Custom Audiences consent policies. With clear financial incentives to bend or break those rules and limited effort spent investigating to ensure compliance, Facebook left itself and its users open to exploitation.

Last week Facebook banned the use of third-party data brokers like Experian and Acxiom for ad targeting, closing a marketing feature called Partner Categories. Facebook is believed to have been trying to prevent any ill-gotten data from being laundered through these data brokers and then directly imported to Facebook to target users. But that left open the option for businesses to compile illicit data sets or pull them from data brokers, then upload them to Facebook as Custom Audiences by themselves.

The Custom Audiences certification tool could close that loophole. It’s still being built, so Facebook wouldn’t say exactly how it will work. I asked if Facebook would scan uploaded user lists and try to match them against a database of suspicious data, but for now it sounds more like Facebook will merely require a written promise.

Meanwhile, barring the sharing of Custom Audiences between Business Accounts might prevent those with access to email lists from using them to promote companies unrelated to the one to which users gave their email address. Facebook declined to comment on how the new ban on Custom Audience sharing would work.

Now Facebook must find ways to thwart misuse of its targeting tools and audit anyone it suspects may have already violated its policies. Otherwise it may receive the ire of privacy-conscious users and critics, and strengthen the case for substantial regulation of its ads (though regulation could end up protecting Facebook from competitors who can’t afford compliance). Still the question remains why it took such a massive data privacy scandal for Facebook to take a tougher stance on requiring user consent for ad targeting. And given that written promises didn’t stop Kogan or Cambridge Analytica from misusing data, why would they stop advertisers bent on boosting profits?




https://ift.tt/2GsTiwJ

Arbtr wants to create an anti-feed where users can only share one thing at a time

At a time when the models of traditional social networks are being questioned, it’s more important than ever to experiment with alternatives. Arbtr is a proposed social network that limits users to sharing a single thing at any given time, encouraging “ruthless self-editing” and avoiding “nasty things” like endless feeds filled with trivial garbage.

It’s seeking funds on Kickstarter and could use a buck or two; I plan to chip in.

Now, I know what you’re thinking. “Why would I give money to maybe join a social network eventually that might not have any of my friends on it? That is, if it ever even exists?” Great question.

The answer is: how else do you think we’re going to replace Facebook? Someone with a smart, different idea has to come along and we have to support them. If we won’t spare the cost of a cup of coffee for a purpose like that, then we deserve the social networks we’ve got. (And if I’m honest, I’ve had very similar ideas over the last few years and I’m eager to see how they might play out in reality.)

The fundamental feature is, of course, the single-sharing thing. You can only show off one item at a time, and when you post a new one, the old one (and any discussion, likes, etc) will be deleted. There will be options to keep logs of these things, and maybe premium features to access them (or perhaps metrics), but the basic proposal is, I think, quite sound — at the very least, worth trying.

Some design ideas for the app. I like the text one but it does need thumbnails.

If you’re sharing less, as Arbtr insists you will, then presumably you’ll put more love behind those things you do share. Wouldn’t that be nice?

We’re in this mess because we bought wholesale the idea that the more you share, the more connected you are. Now that we’ve found that isn’t the case – and in fact we were in effect being fattened for a perpetual slaughter — I don’t see why we shouldn’t try something else.

Will it be Arbtr? I don’t know. Probably not, but we’ve got a lot to gain by giving ideas like this a shot.



https://ift.tt/2uyViC3

Who gains from Facebook’s missteps?

When Facebook loses, who wins?

That’s a question for startups that may be worth contemplating following Facebook’s recent stock price haircut. The company’s valuation has fallen by around $60 billion since the Cambridge Analytica scandal surfaced earlier this month and the #DeleteFacebook campaign gained momentum.

That’s a steep drop, equal to about 12 percent of the company’s market valuation, and it’s a decline Facebook appears to be suffering alone. As its shares fell over the past couple of weeks, stocks of other large-cap tech and online media companies stayed much flatter.

So where did the money go? It’s probably a matter of perspective. For a Facebook shareholder, that valuation is simply gone. And until executives’ apologies resonate and users’ desire to click and scroll overcomes their privacy fears, that’s how it is.

An alternate view is that the valuation didn’t exactly disappear. Investors may still believe the broad social media space is just as valuable as it was a couple of weeks ago. It’s just that less of that pie should be the exclusive domain of Facebook.

If one takes that second notion, then the possibilities for who could benefit from Facebook’s travails start to get interesting. Of course, there are public market companies, like Snap or Twitter, that might pick up traffic if the #DeleteFacebook movement gains momentum without spreading to other big brands. But it’s in the private markets where we see the highest number of potential beneficiaries of Facebook’s problems.

In an effort to come up with some names, we searched through Crunchbase for companies in social media and related areas. The resulting list includes companies that have raised good-sized rounds in the past couple of years and could conceivably see gains if people cut back on using Facebook or owning its stock.

Of course, people use Facebook for different things (posting photos, getting news, chatting with friends and so on), so we lay out a few categories of potential beneficiaries of a Facebook backlash.

Messaging

Facebook has a significant messaging presence, but it hasn’t been declared the winner. Alternatives like Snap, LINE, WeChat and plain old text messages are also massively popular.

That said, what’s bad for Messenger and Facebook-owned WhatsApp is probably good for competitors. And if more people want to do less of their messaging on Facebook, it helps that there are a number of private companies ready to take its place.

Crunchbase identified six well-funded messaging apps that could fit the bill (see list). Collectively, they’ve raised well over $2 billion — if one includes the $850 million initial coin offering by Telegram.

Increasingly, these private messaging startups are focused on privacy and security, including Wickr, the encrypted messaging tool that has raised more than $70 million, and Silent Circle, another encrypted communications provider that has raised $130 million.

Popular places to browse on a screen

People who cut back on Facebook may still want to spend hours a day staring at posts on a screen. So it’s likely they’ll start staring at something else that’s content-rich, easy-to-navigate and somewhat addictive.

Luckily, there are plenty of venture-backed companies that fit that description. Many of these are quite mature at this point, including Pinterest for image collections, Reddit for post and comment threads and Quora for Q&A (see list).

Granted, these will not replace the posts keeping you up to date on the life events of family and friends. But they could be a substitute for news feeds, meme shares and other non-personal posts.

Niche content

A decline in Facebook usage could translate into a rise in traffic for a host of niche content and discussion platforms focused on sports, celebrities, social issues and other subjects.

Crunchbase News identified at least a half-dozen that have raised funding in recent quarters, which is just a sampling of the total universe. Selected startups run the gamut from The Players’ Tribune, which features first-hand accounts from top athletes, to Medium, which seeks out articles that resonate with a wide audience.

Niche sites also provide a more customized forum for celebrities, pundits and subject-matter experts to engage directly with fans and followers.

Community and engagement

People with common interests don’t have to share them on Facebook. There are other places that can offer more tailored content and social engagement.

In recent years, we’ve seen an increase in community and activity-focused social apps gain traction. Perhaps the most prominent is Nextdoor, which connects neighbors for everything from garage sales to crime reports. We’re also seeing some upstarts focused on creating social networks for interest groups. These include Mighty Networks and Amino Apps.

Though some might call it a stretch, we also added to the list WeWork, recent acquirer of Meetup, and The Guild, two companies building social networks in the physical world. These companies are encouraging people to come out and socially network with other people (even if it just means sitting in a room with other people staring at a screen).

Watch where the money goes

Facebook’s latest imbroglio is still too recent to expect a visible impact in the startup funding arena. But it will be interesting to watch in the coming months whether potential rivals in the above categories raise a lot more cash and attract more users.

If there’s demand, there’s certainly no shortage of supply on the investor front. The IPO window is wide open, and venture investors are sitting on record piles of dry powder. It hasn’t escaped notice, either, that social media offerings, like Facebook, LinkedIn and Snap, have generated the biggest exit total of any VC-funded sector.

Moreover, those who’ve argued that it’s too late for newcomers have a history of being proven wrong. After all, that’s what people were saying about would-be competitors to MySpace in 2005, not long before Facebook made it big.



https://ift.tt/2Ij7TuX


Friday, 30 March 2018

The real threat to Facebook is the kool-aid turning sour

These kinds of leaks didn’t happen when I started reporting on Facebook eight years ago. It was a tight-knit cult convinced of its mission to connect everyone, but with the discipline of a military unit where everyone knew loose lips sink ships. Motivational posters with bold corporate slogans dotted its offices, rallying the troops. Employees were happy to be evangelists.

But then came the fake news, News Feed addiction, violence on Facebook Live, cyberbullying, abusive ad targeting, election interference, and most recently the Cambridge Analytica app data privacy scandals. All the while, Facebook either willfully believed the worst case scenarios could never come true, was naive to their existence, or calculated the benefits and growth outweighed the risks. And when finally confronted, Facebook often dragged its feet before admitting the extent of the problems.

Inside the social network’s offices, the bonds began to fray. Slogans took on sinister second meanings. The kool-aid tasted different.

Some hoped they could right the ship but couldn’t. Some craved the influence and intellectual thrill of running one of humanity’s most popular inventions, but now question if that influence and their work is positive. Others surely just wanted to collect salaries, stock, and resume highlights but lost the stomach for it.

Now the convergence of scandals has come to a head in the form of constant leaks.

The Trouble Tipping Point

The more benign leaks merely cost Facebook a bit of competitive advantage. We’ve learned it’s building a smart speaker, a standalone VR headset, and a Houseparty split-screen video chat clone.

Yet policy-focused leaks have exacerbated the backlash against Facebook, putting more pressure on the conscience of employees. As blame fell to Facebook for Trump’s election, word of Facebook prototyping a censorship tool for operating in China escaped, triggering questions about its respect for human rights and free speech. Facebook’s content rulebook got out alongside disturbing tales of the filth the company’s contracted moderators have to sift through. Its ad targeting was revealed to be able to pinpoint emotionally vulnerable teens.

In recent weeks, the leaks have accelerated to a maddening pace in the wake of Facebook’s soggy apologies regarding the Cambridge Analytica debacle. Its weak policy enforcement left the door open to exploitation of data users gave third-party apps, deepening the perception that Facebook doesn’t care about privacy.

And it all culminated with BuzzFeed publishing a leaked “growth at all costs” internal post from Facebook VP Andrew “Boz” Bosworth that substantiated people’s worst fears about the company’s disregard for user safety in pursuit of world domination. Even the ensuing internal discussion about the damage caused by leaks and how to prevent them…leaked.

But the leaks are not the disease, just the symptom. Sunken morale is the cause, and it’s dragging down the company. Former Facebook employee and Wired writer Antonio Garcia Martinez sums it up, saying this kind of vindictive, intentionally destructive leak fills Facebook’s leadership with “horror.”

And that sentiment was confirmed by Facebook’s VP of News Feed Adam Mosseri, who tweeted that leaks “create strong incentives to be less transparent internally and they certainly slow us down”, and will make it tougher to deal with the big problems.

Those thoughts weigh heavy on Facebook’s team. A source close to several Facebook executives tells us they feel “embarrassed to work there” and are increasingly open to other job opportunities. One current employee told us to assume anything certain execs tell the media is “100% false”.

If Facebook can’t internally discuss the problems it faces without being exposed, how can it solve them?

Implosion

The consequences of Facebook’s failures are typically pegged as external hazards.

You might assume the government will finally step in and regulate Facebook. But the Honest Ads Act and other rules about ads transparency and data privacy could end up protecting Facebook by being simply a paperwork speed bump for it while making it tough for competitors to build a rival database of personal info. In our corporation-loving society, it seems unlikely that the administration would go so far as to split up Facebook, Instagram, and WhatsApp — one of the few feasible ways to limit the company’s power.

Users have watched Facebook make misstep after misstep over the years, but can’t help but stay glued to its feed. Even those who don’t scroll rely on it as a fundamental utility for messaging and login on other sites. Privacy and transparency are too abstract for most people to care about. Hence, first-time Facebook downloads held steady and its App Store rank actually rose in the week after the Cambridge Analytica fiasco broke. In regards to the #DeleteFacebook movement, Mark Zuckerberg himself said “I don’t think we’ve seen a meaningful number of people act on that.” And as long as they’re browsing, advertisers will keep paying Facebook to reach them.

That’s why the greatest threat of the scandal convergence comes from inside. The leaks are the canary in the noxious blue coal mine.

Can Facebook Survive Slowing Down?

If employees wake up each day unsure whether Facebook’s mission is actually harming the world, they won’t stay. Facebook doesn’t have the same internal work culture problems as some giants like Uber. But there are plenty of other tech companies with less questionable impacts. Some are still private and offer the chance to win big on an IPO or acquisition. At the very least, those in the Bay could find somewhere to work without spending hours a day on the traffic-snarled 101 freeway.

If they do stay, they won’t work as hard. It’s tough to build if you think you’re building a weapon. Especially if you thought you were going to be making helpful tools. The melancholy and malaise set in. People go into rest-and-vest mode, living out their days at Facebook as a sentence not an opportunity. The next killer product Facebook needs a year or two from now might never coalesce.

And if they do work hard, a culture of anxiety and paralysis will work against them. No one wants to code with their hands tied, and some would prefer a less scrutinized environment. Every decision will require endless philosophizing and risk-reduction. Product changes will be reduced to the lowest common denominator, designed not to offend or appear too tyrannical.

Source: Volkan Furuncu/Anadolu Agency + David Ramos/Getty Images

In fact, that’s partly how Facebook got into this whole mess. A leak by an anonymous former contractor led Gizmodo to report Facebook was suppressing conservative news in its Trending section. Terrified of appearing liberally biased, Facebook reportedly hesitated to take decisive action against fake news. That hands-off approach led to the post-election criticism that degraded morale and pushed the growing snowball of leaks down the mountain.

It’s still rolling.

How to stop morale’s downward momentum will be one of Facebook’s greatest tests of leadership. This isn’t a bug to be squashed. It can’t just roll back a feature update. And an apology won’t suffice. It will have to expel or reeducate the leakers and disloyal without instilling a witch hunt’s sense of dread. Compensation may have to jump upwards to keep talent aboard, like Twitter did when it was floundering. Its top brass will need to show candor and accountability without fueling more indiscretion. And it may need to make a shocking, landmark act of humility to convince employees it’s capable of change.

This isn’t about whether Facebook will disappear tomorrow, but whether it will remain unconquerable for the foreseeable future.

Growth has been the driving mantra for Facebook since its inception. No matter how employees are evaluated, it’s still the underlying ethos. Facebook has positioned itself as a mission-driven company. The implication was always that connecting people is good, so connecting more people is better. The only question was how to grow faster.

Now Zuckerberg will have to figure out how to get Facebook to cautiously foresee the consequences of what it says and does while remaining an appealing place to work. “Move slow and think things through” just doesn’t have the same ring to it.



https://ift.tt/2J9fTjg

Facebook’s mission changed, but its motives didn’t

In January, Facebook announced that it would be changing its feed algorithm to promote users’ well-being over time spent browsing content. That’s a relatively new approach for a company whose ethos once centered around “move fast, break things.”

It wasn’t all that long ago (approximately a year and a half before the algorithm change) that Facebook VP Andrew “Boz” Bosworth published an internal memo called “The Ugly,” which was circulated throughout the company. In it, Boz made it clear to employees that connecting people (i.e. growth) is the main focus at Facebook, at all costs.

Buzzfeed first published the memo, which said:

Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.

And still we connect people.

The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.

He goes on:

That isn’t something we are doing for ourselves. Or for our stock price (ha!). It is literally just what we do. We connect people. Period.

That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.

Facebook launched in 2004 and ushered in a honeymoon period for users. We reveled in uploading photos from our digital cameras and sharing them with friends. We cared about each and every notification. We shared our status. We played Farmville. We diligently curated our Likes.

But the honeymoon is over. Facebook grew to 1 billion active users in 2012. The social network now has over 2 billion active users. A growing number of people get their news from social media. The size and scope of Facebook is simply overwhelming.

And we’ve been well aware, as users and outsiders looking in on the network, that just like any other tool, Facebook can be used for evil.

But there was still some question whether or not Facebook leadership understood that principle, and if they did, whether or not they actually cared.

For a long time, perhaps too long, Facebook adhered to the “Move fast, break things” mentality. And things have certainly been broken, from fake news circulated during the 2016 Presidential election to the improper use of user data by third-party developers and Cambridge Analytica. And that’s likely the tip of the iceberg.

The memo was written long before the shit hit the fan for Facebook. It was published following the broadcast of Antonio Perkins’ murder on Facebook. This was back when Facebook was still insisting that it isn’t a media company, that it is simply a set of pipes through which people can ship off their content.

What is so shocking about the memo is that it confirms some of our deepest fears. A social network, with a population greater than any single country, is solely focused on growth over the well-being of the society it’s built. That the ends, to be a product everyone uses, might justify the means.

Facebook has tried to move away from this persona, however gently. In late 2016, Zuckerberg finally budged on the idea that Facebook is a media company, clarifying that it’s not a traditional media company. Last year, the company launched the Journalism Project in response to the scary growth of fake news on the platform. Zuckerberg even posted full-page print ads seeking patience and forgiveness in the wake of this most recent Cambridge Analytica scandal.

While that all seems like more of a public relations response than actionable change, it’s better than the stoic, inflexible silence of before.

After Buzzfeed published the memo, Boz and Zuckerberg both responded.

Boz said it was all about spurring internal debate to help shape future tools.

Zuck had this to say:

Boz is a talented leader who says many provocative things. This was one that most people at Facebook including myself disagreed with strongly. We’ve never believed the ends justify the means.

We recognize that connecting people isn’t enough by itself. We also need to work to bring people closer together. We changed our whole mission and company focus to reflect this last year.

If Boz wrote this memo to spark debate, it’s hard to discern whether that debate led to real change.

The memo has since been deleted, but you can read the full text below:

The Ugly

We talk about the good and the bad of our work often. I want to talk about the ugly.

We connect people.

That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide.

So we connect more people

That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.

And still we connect people.

The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.

That isn’t something we are doing for ourselves. Or for our stock price (ha!). It is literally just what we do. We connect people. Period.

That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.

The natural state of the world is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don’t win. The ones everyone use win.

I know a lot of people don’t want to hear this. Most of us have the luxury of working in the warm glow of building products consumers love. But make no mistake, growth tactics are how we got here. If you joined the company because it is doing great work, that’s why we get to do that great work. We do have great products but we still wouldn’t be half our size without pushing the envelope on growth. Nothing makes Facebook as valuable as having your friends on it, and no product decisions have gotten as many friends on as the ones made in growth. Not photo tagging. Not news feed. Not messenger. Nothing.

In almost all of our work, we have to answer hard questions about what we believe. We have to justify the metrics and make sure they aren’t losing out on a bigger picture. But connecting people. That’s our imperative. Because that’s what we do. We connect people.



https://ift.tt/2E7sX4Y

Twitch lays off some employees as part of ‘team adjustments’

Twitch, the Amazon-owned live-streaming platform for gaming, laid off “several” people yesterday, Polygon first reported.

It’s not clear how many people were let go, but according to Polygon the number was likely no more than 30. Twitch has since confirmed the layoffs to TechCrunch.

“Coming off the record-setting numbers shared in our 2017 Retrospective, Twitch is continuing to grow and advance with success stories from Overwatch League to Fortnite’s milestone-setting streams,” a Twitch spokesperson told TC. “In order to maintain this momentum, we have an aggressive growth strategy for 2018 with plans to increase our headcount by approximately 30%. While we’ve conducted team adjustments in some departments, our focus is on prioritizing areas most important for the community.”
https://ift.tt/2IgvbBI

The SteelSeries Arctis Pro lineup is a new high-water mark in comfort and quality

SteelSeries has two new Arctis Pro gaming headsets out, and they pack a lot of tech and versatility into a comfortable, visually attractive package. The SteelSeries Arctis Pro Wireless and Arctis Pro + GameDAC are both incredibly capable headsets that deliver terrific sound, and depending on your system needs, should probably be your first choice when looking for new gaming audio gear.

The Arctis Pro Wireless is, true to its name, wire-free, but also promises lossless 2.4GHz transmission to ensure lag-free audio, too – a must for competitive gaming. The combination of the wireless functionality, the long-wearing comfort of the suspension system headband and the included transmitter base that can hold and charge a swappable battery as well as display all key information on an OLED readout makes this a standout choice.

There are some limitations, however – compatibility is limited to either PS4 or PC for this one, for instance. The wired Arctis Pro (without GameDAC) is compatible with the Xbox One, but both the wireless version and the version that connects to the wired DAC will only work with either Sony’s latest consoles or with a Windows or Mac-based gaming PC.

I’m a bit saddened by that since I’m a big fan of PUBG on Xbox, and lately of Sea of Thieves, but I also regularly play PS4 and PC games, and the Arctis Pro Wireless is my weapon of choice when using either, for multiplayer or single-player games. The wearability and sound quality (which includes DTS X 7.1 surround on PC) are so good that I’ll often opt to use them in place of my actual 5.1 physical surround system, even when I don’t need to chat with anyone.

Other options, like the Turtle Beach Elite Pro Tournament Headset, offer different advantages including more easily accessible fine-tune control over soundscape, balance of chat and game audio and other features, but the SteelSeries offers a less complicated out-of-box experience, and better all-day wearability thanks to taking cues from athletic wear for its materials and design.

The GameDAC option is additionally Hi-Res Audio certified, which is good if you’re looking to stream FLAC files or high-res audio from services like Tidal. The DAC itself also makes all audio sound better overall, and gives you more equalization options from the physical controller.

The main thing to consider with the Arctis Pro + GameDAC ($249.99) and the Arctis Pro Wireless ($329.99) is the cost. They’re both quite expensive relative to the overall SteelSeries lineup and those of competitors, too. But in this case, cost really is reflective of quality – channel separation and surround virtualization are excellent on these headsets, and the mic sounds great to other players I talked to as well. Plus, the Pro Wireless can connect to both Bluetooth and the 2.4GHz transmitter simultaneously, so you can use it with your phone as well as your console, and the retractable mic keeps things looking fairly stylish, too.

https://ift.tt/2uxWMwd

Why going serverless can set your developers free

As a relatively new and potentially less well-known architecture, serverless might require some defining. Indeed, three years ago this whole area of technology did not exist.

Serverless means, from a developer’s point of view, that there is no need to set up or manage the server infrastructure that runs your applications.

This doesn’t mean that servers aren’t involved, of course – they are, but with serverless computing developers are shielded from server management and capacity planning decisions.

Without having to factor servers into their task load or having the burden of managing and maintaining an infrastructure, developers can focus on what they do best.

What’s more – in the past, they have had to learn how to configure many different services and frameworks, including Apache, Nginx, Postfix or PHP – just to name a few.

Now, serverless has liberated developers by allowing them to focus on coding in the languages they best understand. The net effect is that developers have more time to focus on innovation, instead of managing the servers that run their code.
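To make that concrete, here is a minimal sketch of what a complete serverless function can look like, written in the style of an Apache OpenWhisk action (the open-source platform behind IBM Cloud Functions). This is illustrative rather than provider-specific: the platform invokes the function once per request, and everything below is application code – there is no server process to configure.

```python
# A minimal serverless action, OpenWhisk-style: the platform calls main()
# with a dict of request parameters and serializes the returned dict.
def main(params):
    name = params.get("name", "world")
    return {"greeting": f"Hello, {name}!"}
```

Deploying and scaling this function, including running many copies in parallel under load, is the platform's job, not the developer's.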

Speed, ease & cost efficiency

So, how is this new tech paradigm being applied? Because serverless is relatively new, it lends itself to modern cloud-native or microservice applications as opposed to legacy apps. Serverless computing is most suited to low-performance computing workloads due to the resource limits imposed by cloud providers.

A particularly compelling use case for serverless is building APIs, thanks to the increased speed of development. APIs are the software intermediaries that allow two applications to talk to each other. With serverless, developers can build APIs without having to manage any back-end infrastructure. If you’re a front-end JavaScript developer, for example, you can now build an API much more quickly because you don’t have to deal with lower-level infrastructure operations or learn multiple coding languages.
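A single function can back an entire API endpoint once the platform maps an HTTP route to it. The hypothetical sketch below follows the general shape of an HTTP-triggered serverless handler; the response field names (statusCode, headers, body) are an assumption modeled on common web-action conventions, not a specific provider's contract.

```python
import json

# Hypothetical serverless API endpoint: the platform routes an HTTP request
# here, so no web server, load balancer, or VM needs configuring by hand.
def main(params):
    user_id = params.get("id")
    if user_id is None:
        # Missing query parameter: return an HTTP 400 with a JSON error body.
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}
    # In a real action this would query a managed datastore (assumption).
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"id": user_id, "status": "active"}),
    }
```

The whole "back end" of this endpoint is the one function; request routing, TLS, and scaling are delegated to the platform.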

In addition to developing APIs more quickly and easily, the use cases for serverless computing extend to handling variable traffic. APIs might get busy every day, but the peaks can be hard to predict. Take the example of a ticketing company: it will need to cope with huge spikes in traffic when a new show is announced, but won’t necessarily need that level of server capacity at all times. With serverless, greater cost efficiency can be achieved because you only pay when requests are coming in and your code is running.

For start-ups & for large-scale enterprises

With lower upfront costs, serverless is great for start-ups – because they only really start paying when they begin to get traffic. Beyond start-ups, enterprises of any size that are looking to develop APIs will find that the benefits are clear – not only is serverless better value for money, it is quick, easy to use, and doesn’t involve taking on a huge amount of risk.

Software developers SiteSpirit are a good example of how serverless architecture can help solve a business challenge by optimising processes. SiteSpirit helps clients create picture-perfect marketing material with a serverless media-library-as-a-service. In highly visual industries such as travel and real estate, marketers need quick and easy access to the right images to design compelling campaigns. With this cloud based service, the pain of storing, tagging, retrieving and manipulating thousands of images is removed —and is built on a serverless architecture to keep costs low.  

Open-source platforms open up innovation

However, concerns remain around proprietary lock-in, as code written and deployed in the cloud becomes inextricably linked with the cloud provider at that point in time. This means that applications become optimised for a specific cloud environment, and moving cloud provider could sacrifice the performance and responsiveness of an application. This is why many developers are now looking to open-source solutions.

Deploying this kind of higher-order, open-source tool, as opposed to running numerous functions in parallel, will be crucial to optimising time spent on actual coding and building game-changing technology in the cloud.

Serverless: Serving the developer community

As serverless architectures are improved and the parameters are broadened, they are likely to become increasingly popular with developers looking to save time, reduce costs and focus on what they do best.  

Within a couple of years, serverless is likely to become a default mode for the development of cloud-based technologies, liberating developers from back-end infrastructure maintenance and opening up programming to the many not the few. 

In this way, serverless has the potential to provide a whole new way of working for developers; one in which innovation is at the fore.

James Thomas is a developer advocate for IBM Cloud. He has worked on a wide range of projects in his eight years at the company, and was a leading open-source developer for a JavaScript toolkit before working on the first commercial system for IBM Watson as the UI Technical Lead.

https://ift.tt/2J50bWx

Thursday, 29 March 2018

Get AOMEI Backupper Pro free for World Backup Day

To celebrate World Backup Day, AOMEI is giving TechRadar readers the opportunity to download AOMEI Backupper Pro free. This premium backup suite usually retails for US$49.95/£47.99/AU$83.99, making this a very special deal.

You must register your software between March 30 and April 1, so act quickly!

To get your free software, download and install the trial version of AOMEI Backupper Pro, then click the 'Purchase' link at the top and register with the license key AMPR-1R75D-YS3W1-45ZV6.

Registering AOMEI Backupper Pro

With AOMEI Backupper Pro, you can back up anything on your PC – the entire system, a partition, or specific files or folders. You can also clone a whole drive, including your operating system, making it easy to transfer everything to a new hard drive.

Restoring your backed up data is simple too, whether you want everything or just certain files. Everything is explained with clear step-by-step instructions.

AOMEI Backupper Pro also offers tools for checking the integrity of a backup image, creating bootable rescue media (such as a DVD or USB drive), merging multiple incremental backups, and mounting an image to a virtual partition for browsing.

World Backup Day

World Backup Day takes place on March 31, and serves as a reminder of the importance of making regular backups.

It doesn't take much for your important files to be lost – whether it's through theft, hardware damage, a ransomware attack or accidental deletion – and once they're gone they can be impossible to restore. You never know when disaster might strike.

We store a huge amount of our professional and personal lives digitally, and a regular backup plan is the only way to make sure they're properly protected.

https://ift.tt/2Gi6jwZ

Facebook tries to prove it cares with “Fighting Abuse @ Scale” conference

Desperate to show it takes thwarting misinformation, fraud, and spam seriously, Facebook just announced a last-minute “Fighting Abuse @Scale” conference in San Francisco on April 25th. Speakers from Facebook, Airbnb, Google, Microsoft, and LinkedIn will discuss stopping fake news, preventing counterfeit account creation, using honeypots to disrupt adversarial infrastructure, and employing machine learning to boost platform safety.

Fighting Abuse @Scale will be held at the Bespoke Event Center within the Westfield Mall in SF. We can expect more technical details about the new proactive artificial intelligence tools Facebook announced today during a conference call about its plans to protect election integrity. The first session is titled “Combating misinformation at Facebook” and will feature an engineering director and data scientists from the company.

Facebook previously held “Fighting Spam @Scale” conferences in May 2015 and November 2016 just after the Presidential election. But since then, public frustration has built up to a breaking point for the social network. Russian election interference, hoaxes reaching voters, violence on Facebook Live, the ever-present issue of cyberbullying, and now the Cambridge Analytica data privacy scandal have created a convergence of backlash. With its share price plummeting, former executives speaking out against its impact on society, and CEO Mark Zuckerberg on a media apology tour, Facebook needs to show this isn’t just a PR problem. It needs users, potential government regulators, and its own existing and potential employees to see it’s willing to step up and take responsibility for fixing its platform.



https://tcrn.ch/2Gkhrcu


Instagram reenables GIF sharing after GIPHY promises no more racism

A racial slur GIF slipped into GIPHY’s sticker library earlier this month, prompting Instagram and Snapchat to drop their GIPHY integrations. Now Instagram is reactivating the integration after GIPHY confirmed it has reviewed its GIF library four times and will preemptively review any new GIFs it adds. Snapchat said it had nothing to share right now about whether it will reactivate GIPHY.

“We’ve been in close contact with GIPHY throughout this process and we’re confident that they have put measures in place to ensure that Instagram users have a good experience,” an Instagram spokesperson told TechCrunch. GIPHY told TechCrunch in a statement that “To anyone who was affected: we’re sorry. We take full responsibility for this recent event and under no circumstances does GIPHY condone or support this kind of content . . . We have also finished a full investigation into our content moderation systems and processes and have made specific changes to our process to ensure something like this does not happen again.”

We first reported Instagram was building a GIPHY integration back in January before it launched a week later, with Snapchat adding a similar feature in February. But it wasn’t long before things went wrong. First spotted by a user in the U.K. around March 8th, the GIF included a racial slur. We’ve shared a censored version of the image below, but warning, it still includes graphic content that may be offensive to some users.

When asked, Snapchat told TechCrunch “We have removed GIPHY from our application until we can be assured that this will never happen again.” Instagram wasn’t aware that the racist GIF was available in its GIPHY integration until informed by TechCrunch, leading to a shutdown of the feature within an hour. An Instagram spokesperson told TechCrunch “This type of content has no place on Instagram.” After 12 hours of silence, GIPHY responded the next morning, telling us “After investigation of the incident, this sticker was available due to a bug in our content moderation filters specifically affecting GIF stickers.”

The fiasco highlights the risks of major platforms working with third-party developers to bring outside and crowdsourced content into their apps. Snapchat historically resisted working with established developers, but recently has struck more partnerships, particularly around augmented reality lenses and marketing service providers. While it’s an easy way to provide more entertainment and creative expression tools, developer integrations also force companies to rely on the quality and safety of things they don’t fully control. As Instagram and Snapchat race for users around the world, they’ll have to weigh the risks and rewards of letting developers into their gardens.

GIPHY’s full statement is below.

CHANGES TO GIPHY’S STICKER MODERATION

Before we get into the details, we wanted to take a moment and sincerely apologize for the deeply offensive sticker discovered by a user on March 8, 2018. To anyone who was affected: we’re sorry. We take full responsibility for this recent event and under no circumstances does GIPHY condone or support this kind of content.

The content was immediately removed and after investigation a bug was found in our content moderation filters affecting stickers. This bug was immediately fixed and all stickers were re-moderated.

We have also finished a full investigation into our content moderation systems and processes and have made specific changes to our process to ensure something like this does not happen again.

THE CHANGES

After fixing the bug in our content moderation filters and confirming that the sticker was successfully detected, we re-moderated our entire sticker library 4x.

We have also added another level of GIPHY moderation before each sticker is approved into the library. This is now a permanent addition to our moderation process.

We hope this will ensure that GIPHY stickers will always be fun and safe no matter where you see them.

THE FUTURE AND BEYOND

GIFs and Stickers are supposed to make the Internet a better, more entertaining place. GIPHY is committed to making sure that’s always the case. As GIPHY continues to grow, we’re going to continue looking for ways to improve our user experience. Please let us know how we can help at: support@giphy.com.

Team Giphy.


https://ift.tt/2E5Cv0o

Facebook starts fact checking photos/videos, blocks millions of fake accounts per day

Facebook has begun letting partners fact check photos and videos beyond news articles, and proactively review stories before Facebook asks them. Facebook is also now preemptively blocking the creation of millions of fake accounts per day. Facebook revealed this news on a conference call with journalists [Update: and later a blog post] about its efforts around election integrity that included Chief Security Officer Alex Stamos who’s reportedly leaving Facebook later this year but claims he’s still committed to the company.

Articles flagged as false by Facebook’s fact checking partners have their reach reduced and display Related Articles showing perspectives from reputable news outlets below.

Stamos outlined how Facebook is building ways to address fake identities, fake audiences grown illicitly or pumped up to make content appear more popular, acts of spreading false information, and false narratives that are intentionally deceptive and shape people’s views beyond the facts. “We’re trying to develop a systematic and comprehensive approach to tackle these challenges, and then to map that approach to the needs of each country or election” says Stamos.

Samidh Chakrabarti, Facebook’s product manager for civic engagement also explained that Facebook is now proactively looking for foreign-based Pages producing civic-related content inauthentically. It removes them from the platform if a manual review by the security team finds they violate terms of service.

“This proactive approach has allowed us to move more quickly and has become a really important way for us to prevent divisive or misleading memes from going viral” said Chakrabarti. Facebook first piloted this tool in the Alabama special election, but has now deployed it to protect Italian elections and will use it for the U.S. mid-term elections.

Meanwhile, advances in machine learning have allowed Facebook “to find more suspicious behaviors without assessing the content itself” to block millions of fake account creations per day “before they can do any harm”, says Chakrabarti.

Facebook implemented its first slew of election protections back in December 2016, including working with third-party fact checkers to flag articles as false. But those red flags were shown to entrench some people’s belief in false stories, leading Facebook to shift to showing Related Articles with perspectives from other reputable news outlets. As of yesterday, Facebook’s fact checking partners began reviewing suspicious photos and videos which can also spread false information. This could reduce the spread of false news image memes that live on Facebook and require no extra clicks to view, like doctored photos showing the Parkland school shooting survivor Emma González ripping up the constitution.

Normally, Facebook sends fact checkers stories that are being flagged by users and going viral. But now in countries like Italy and Mexico in anticipation of elections, Facebook has enabled fact checkers to proactively flag things because in some cases they can identify false stories that are spreading before Facebook’s own systems. “To reduce latency in advance of elections, we wanted to ensure we gave fact checkers that ability” says Facebook’s News Feed product manager Tessa Lyons.

A photo of Parkland shooting survivor Emma González ripping up a shooting range target was falsely doctored to show her ripping up the constitution. Photo fact checking could help Facebook prevent the false image from going viral. [Image via CNN]

With the mid-terms coming up quick, Facebook has to both secure its systems against election interference, as well as convince users and regulators that it’s made real progress since the 2016 presidential election where Russian meddlers ran rampant. Otherwise, Facebook risks another endless news cycle about it being a detriment to democracy that could trigger reduced user engagement and government intervention.

https://ift.tt/2usDecF

Amazon's next big thing could be checking accounts for teens

Not satisfied with being the biggest online retailer on the planet, Amazon has its sights set on a new product for a new demographic: checking accounts for teens.

Sources speaking with Bloomberg say Amazon is in early talks with banks about offering a checking account-like product to those too young to open one at a traditional bank without their parents' permission.

Amazon apparently plans to aim the accounts squarely at teens, making them appeal to that age group as well as to people who don't have a credit card. Amazon Alexa could also feature in the product, further appealing to teens who are growing up with voice assistants.

Amazon has already waded into the financial service waters with Amazon Cash, which lets shoppers add physical cash to their Amazon account by visiting brick-and-mortar stores.

The new checking account product would be more streamlined than this, and offer an alternative to those who don't have bank accounts to begin with.

What are you buying?

One issue Amazon could run into is that teens prefer going to physical stores rather than shopping online. The age-old tradition of hanging out at the mall with your friends apparently hasn't faded in the digital age. 

An additional concern we see is the issue of privacy. Amazon would have access to a lot of personal information about young people before they turn 18, including, of course, their spending habits.

This would be good for Amazon's business, but raises the question of how much information we turn over to tech companies in the name of convenience.

There's no word yet on when Amazon's checking accounts would go live, but we'll keep you posted.

https://ift.tt/2GH7J3w

Twitter makes it easier to share the right part of a live video with launch of ‘Timestamps’

Twitter today is introducing a new feature that will make it easier to share a key moment from a live video, so those viewing the tweet don’t have to scroll to the part of the broadcast you want to talk about. The feature, called “Timestamps,” is something Twitter says it built in response to existing user behavior on Twitter.

Before, users could only tweet an entire live video. So, if they wanted to highlight a particular segment, they would tweet the video along with the specific time in the video where the part they’re trying to share begins.

Those viewing the tweet would then have to scroll through the video to the correct time, which can be cumbersome on longer broadcasts and challenging on slower connections.


The new Timestamps feature makes this whole process simpler. Now, when you tap to share a live video (or a replay of a live video), you’re able to scroll back to the exact time you want the audience to watch. You can then add your own thoughts to the tweet, and post it as usual.

But anyone seeing the tweet will start watching right at the time you specified.

If the video is still live, they’ll then be able to skip to what’s happening now by clicking the “live” button, or they can scroll back and forward in the video as they choose.

The new option ties in well with Twitter’s live streaming efforts, which have seen the company focus on offering live-streamed sporting events, news broadcasts, and other events.

For example, those live-streaming a sports match could re-share the same live video broadcast every time the team scores a goal, with the video already positioned to the right part of the broadcast to capture that action. That could increase the video’s number of viewers, which could then translate to better advertising potential for those live streams.

However, Twitter will not allow advertisers to place their ads against Timestamped moments at launch, because it doesn’t want to get into a situation where an advertiser is positioned against a moment that’s not considered ‘brand-safe.’

Beyond the sports-focused use cases, people could also take advantage of Timestamps to share their favorite song from a live-streamed concert, while reporters could highlight something important said during a press conference.

Twitter notes the Timestamps feature will be available to anyone – not just professional content publishers. It will also work for anyone doing a broadcast from their phone, and will support live videos both on Twitter and Periscope.

On Twitter, you’ll be able to share the live video as a tweet, while on Periscope you’re able to share to your Periscope followers, in addition to sharing to Twitter or sharing as a link.

Timestamps isn’t the first feature Twitter built by watching how people were using its product. The company has a long history of adapting its product to consumer behavior as it did with the previous launches of @ replies, the hashtag, retweets and, most recently, threads. 

The update that delivers support for Timestamps is rolling out today on Twitter for Android and iOS, Twitter.com and Periscope.


https://ift.tt/2GGPACP

Here’s Cambridge Analytica’s plan for voters’ Facebook data

More details have emerged about how Facebook data on millions of US voters was handled after it was obtained in 2014 by UK political consultancy Cambridge Analytica for building psychographic profiles of Americans to target election messages for the Trump campaign.

The dataset — of more than 50M Facebook users — is at the center of a scandal that’s been engulfing the social network giant since newspaper revelations published on March 17 pushed privacy and data protection to the top of the news agenda.

A UK parliamentary committee has published a cache of documents provided to it by an ex-CA employee, Chris Wylie, who gave public testimony in front of the committee at an oral hearing earlier this week. During that hearing he said he believes data on “substantially” more than 50M Facebookers was obtained by CA. Facebook has not commented publicly on that claim.

Among the documents the committee has published today (with some redactions) is the data-licensing contract between Global Science Research (GSR) — the company set up by the Cambridge University professor, Aleksandr Kogan, whose personality test app was used by CA as the vehicle for gathering Facebook users’ data — and SCL Elections (an affiliate of CA), dated June 4, 2014.

The document is signed by Kogan and CA’s now suspended CEO, Alexander Nix.

The contract stipulates that all monies transferred to GSR will be used for obtaining and processing the data for the project — “to further develop, add to, refine and supplement GS psychometric scoring algorithms, databases and scores” — and none of the money paid to Kogan should be spent on other business purposes, such as salaries or office space “unless otherwise approved by SCL”.

Wylie told the committee on Tuesday that CA chose to work with Kogan as he had agreed to work with them on acquiring and modeling the data first, without fixing commercial terms up front.

The contract also stipulates that Kogan’s company must gain “advanced written approval” from SCL to cover costs not associated with collecting the data — including “IT security”.

Which does rather underline CA’s priorities in this project: Obtain, as fast as possible, lots of personal data on US voters, but don’t worry much about keeping that personal information safe. Security is a backburner consideration in this contract.

CA responded to Wylie’s testimony on Tuesday with a statement rejecting his allegations — including claiming it “does not hold any GSR data or any data derived from GSR data”.

The company has not updated its press page with any new statement in light of the publication of a 2014 contract signed by its former CEO and GSR’s Kogan.

Earlier this week the committee confirmed that Nix has accepted its summons to return to give further evidence — saying the public session will likely take place on April 17.

Voter modeling across 11 US States

The first section of the contract between the CA affiliate company and GSR briefly describes the purpose of the project as being to conduct “political modeling” of the population in 11 US states.

On the data protection front, the contract includes a clause stating that both parties “warrant and undertake” to comply with all relevant privacy and data handling laws.

“Each of the parties warrants and undertakes that it will not knowingly do anything or permit anything to be done which might lead to a breach of any such legislation, regulations and/or directives by the other party,” it also states.

CA remains under investigation by the UK’s data protection watchdog, which obtained a warrant to enter its offices last week — and spent several hours gathering evidence. The company’s activities are being looked at as part of a wider investigation by the ICO into the use of data analytics for political purposes.

Commissioner Elizabeth Denham has previously said she’s leaning towards recommending a code of conduct for use of social media for political campaigning — and said she hopes to publish her report by May.

Another clause in the contract between GSR and SCL specifies that Kogan’s company will “seek out informed consent of the seed user engaging with GS Technology” — which would presumably refer to the ~270,000 people who agreed to take the personality quiz in the app deployed via Facebook’s platform.

Upon completion of the project, the contract specifies that Kogan’s company may continue to make use of SCL data for “academic research where no financial gain is made”.

Another clause details an additional research boon that would be triggered if Kogan was able to meet performance targets and deliver SCL with 2.1M matched records in the 11 US states it was targeting — so long as he met its minimum quality standards and at an averaged cost of $0.50 or less per matched record. In that event, he stood to also receive an SCL dataset of around 1M residents of Trinidad and Tobago — also “for use in academic research”.

The second section of the contract explains the project and its specification in detail.

Here it states that the aim of the project is “to infer psychological profiles”, using self-reported personality test data, political party preference and “moral value data”.

The 11 US states targeted by the project are also named as: Arkansas, Colorado, Florida, Iowa, Louisiana, Nevada, New Hampshire, North Carolina, Oregon, South Carolina and West Virginia.

The project is detailed in the contract as a seven-step process:

1. Kogan’s company, GSR, generates an initial seed sample (though the contract does not specify how large this is) using “online panels”.

2. GSR analyzes this seed training data using its own “psychometric inventories” to try to determine personality categories.

3. Kogan’s personality quiz app is deployed on Facebook to gather the full dataset from respondents, and also to scrape a subset of data from their Facebook friends (here it notes: “upon consent of the respondent, the GS Technology scrapes and retains the respondent’s Facebook profile and a quantity of data on that respondent’s Facebook friends”).

4. The psychometric data from the seed sample, plus the Facebook profile data and friend data, are all run through proprietary modeling algorithms, which the contract specifies are based on using Facebook likes to predict personality scores, with the stated aim of predicting the “psychological, dispositional and/or attitudinal facets of each Facebook record”.

5. This generates a series of scores per Facebook profile.

6. These psychometrically scored profiles are matched against voter record data held by SCL, with the goal of matching (and thus scoring) at least 2M voter records for targeting voters across the 11 states.

7. Matched records are returned to SCL, which would then be in a position to craft messages to voters based on their modeled psychometric scores.
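For readers wondering what step 4's likes-to-traits modeling could look like in practice, here is a deliberately toy, hypothetical sketch: regress self-reported trait scores on a binary user-by-like matrix, then score profiles from their likes alone. The real GSR algorithms are proprietary and were never published; every number and name below is illustrative, not a reconstruction of Kogan's models.

```python
import numpy as np

# Toy sketch (all sizes illustrative): synthesize a binary user-by-like
# matrix plus noisy self-reported trait scores, then fit and predict.
rng = np.random.default_rng(0)
n_users, n_likes = 200, 50
X = rng.integers(0, 2, size=(n_users, n_likes))       # 1 = user liked page j
true_w = rng.normal(size=n_likes)                     # hidden like-to-trait weights
y = X @ true_w + rng.normal(scale=0.5, size=n_users)  # self-reported trait score

# Fit by ordinary least squares, then predict a trait score per profile.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
scores = X @ w
```

A pipeline like this only needs the like matrix at prediction time, which is why harvesting friends' profile data alongside the quiz-takers' answers was so valuable to the scheme.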

The “ultimate aim” of the psychometric profiling product Kogan built off of the training and Facebook data sets is imagined as “a ‘gold standard’ of understanding personality from Facebook profile information, much like charting a course to sail”.

The possibility for errors is noted briefly in the document but it adds: “Sampling in this phase [phase 1 training set] will be repeated until assumptions and distributions are met.”

In a later section, on demographic distribution analysis, the contract mentions the possibility for additional “targeted data collection procedures through multiple platforms” to be used — even including “brief phone scripts with single-trait questions” — in order to correct any skews that might be found once the Facebook data is matched with voter databases in each state, (and assuming any “data gaps” could not be “filled in from targeted online samples”, as it also puts it).

In a section on “background and rational”, the contract states that Kogan’s models have been “validity tested” on users who were not part of the training sample, and further claims: “Trait predictions based on Facebook likes are at near test-retest levels and have been compared to the predictions their romantic partners, family members, and friends make about their traits”.

“In all the previous cases, the computer-generated scores performed the best. Thus, the computer-generated scores can be more accurate than even the knowledge of very close friends and family members,” it adds.

His technology is described as “different from most social research measurement instruments” in that it is not solely based on self-reported data — with the follow-on claim being made that: “Using observed data from Facebook users’ profiles makes GS’ measurements genuinely behavioral.”

That suggestion, at least, seems fairly tenuous — given that a portion of Facebook users are undoubtedly aware that the site is tracking their activity when they use it, which in turn is likely to affect how they use Facebook.

So the idea that Facebook usage is a 100% naked reflection of personality deserves far more critical questioning than Kogan’s description of it in the contract with SCL.

And, indeed, some of the commentary around this news story has queried the value of the entire exposé by suggesting CA’s psychometric targeting wasn’t very effective — ergo, it may not have had a significant impact on the US election.

In contrast to claims being made for his technology in the 2014 contract, Kogan himself claimed in a TV interview earlier this month (after the scandal broke) that his predictive modeling was not very accurate at an individual level — suggesting it would only be useful in aggregate to, for example, “understand the personality of New Yorkers”.

Yesterday Channel 4 News reported that it had been able to obtain some of the data Kogan modeled for CA — supporting Wylie’s testimony that CA had not locked down access to the data.

In its report, the broadcaster spoke to some of the named US voters in Colorado — showing them the scores Kogan’s models had given them. Unsurprisingly, not all their interviewees thought the scores were an accurate reflection of who they were.

However regardless of how effective (or not) Kogan’s methods were, the bald fact that personal information on 50M+ Facebook users was so easily sucked out of the platform is of unquestionable public interest and concern.

The added fact this data set was used for psychological modeling for political message targeting purposes without people’s knowledge or consent just further underlines the controversy. Whether the political microtargeting method worked well or was hit and miss is really by the by.

In the contract, Kogan’s psychological profiling methods are described as “less costly, more detailed, and more quickly collected” than other individual profiling methods, such as “standard political polling or phone samples”.

The contract also flags up how the window of opportunity for his approach was closing — at least on Facebook’s platform. “GS’s method relies on a pre-existing application functioning under Facebook’s old terms of service,” it observes. “New applications are not able to access friend networks and no other psychometric profiling applications exist under the old Facebook terms.”

As I wrote last weekend, Facebook faced a legal challenge to the lax system of app permissions it operated in 2011. And after a data protection audit and re-audit by the Irish Data Protection Commissioner, in 2011 and 2012, the regulator recommended it shutter developers’ access to friend networks — which Facebook finally did (for both old and new apps) as of mid 2015.

But in mid 2014 existing developers on its platform could still access the data — as Kogan was able to, handing it off to SCL and its affiliates.

Other documents published by the committee today include a contract between Aggregate IQ — a Canadian data company which Wylie described to the committee as ‘CA Canada’ (aka yet another affiliate of CA/SCL, and SCL Elections).

This contract, which is dated September 15, 2014, is for the: “Design and development of an Engagement Platform System”, also referred to as “the Ripon Platform”, and described as: “A scalable engagement platform that leverages the strength of SCLs modelling data, providing an actionable toolset and dashboard interface for the target campaigns in the 2014 election cycle. This will consist of a bespoke engagement platform (SCL Engage) to help make SCLs behavioural microtargeting data actionable while making campaigns more accountable to donors and supporter”.

Another contract between Aggregate IQ and SCL is dated November 25, 2013, and covers the delivery of a CRM system, a website and “the acquisition of online data” for a political party in Trinidad and Tobago. In this contract a section on “behavioral data acquisition” details their intentions thus:

  • Identify and obtain qualified sources of data that illustrate user behaviour and contribute to the development of psychographic profiling in the region

  • This data may include, but is not limited to:

    • Internet Service Provider (ISP) log files

    • First party data logs

    • Third party data logs

    • Ad network data

    • Social bookmarking

    • Social media sharing (Twitter, FB, MySpace)

    • Natural Language Processing (NLP) of URL text and images

    • Reconciliation of IP and User-Agent to home address, census tract, or dissemination area

In his evidence to the committee on Tuesday Wylie described the AIQ Trinidad project as a “pre-cursor to the Rippon project to see how much data could be pulled and could we profile different attributes in people”.

He also alleged AIQ has used hacker type techniques to obtain data. “AIQ’s role was to go and find data,” he told the committee. “The contracting is pulling ISP data and there’s also emails that I’ve passed on to the committee where AIQ is working with SCL to find ways to pull and then de-anonymize ISP data. So, like, raw browsing data.”

Another document in the bundle published today details a project pitch by SCL to carry out $200,000 worth of microtargeting and political campaign work for the conservative organization ForAmerica.org — for “audience building and supporter mobilization campaigns”.

There is also an internal SCL email chain regarding a political targeting project that also appears to involve the Kogan modeled Facebook data, which is referred to as the “Bolton project” (which seems to refer to work done for the now US national security advisor, John Bolton) — with some back and forth over concerns about delays and problems with data matching in some of the US states and overall data quality.

“Need to present the little information we have on the 6,000 seeders to [sic] we have to give a rough and ready and very preliminary reading on that sample ([name redacted] will have to ensure the appropriate disclaimers are in place to manage their expectations and the likelihood that the results will change once more data is received). We need to keep the client happy,” is one of the suggested next steps in an email written by an unidentified SCL staffer working on the Bolton project.

“The Ambassador’s team made it clear that he would want some kind of response on the last round of foreign policy questions. Though not ideal, we will simply piss off a man who is potentially an even bigger client if we remain silent on this because it has been clear to us this is something he is particularly interested in,” the emailer also writes.

“At this juncture, we unfortunately don’t have the luxury of only providing the perfect data set but must deliver something which shows the validity of what we have been promising we can do,” the emailer adds.

Another document is a confidential memorandum prepared for Rebekah Mercer (the daughter of US billionaire Robert Mercer; Wylie has said Mercer provided the funding to set up CA), former Trump advisor Steve Bannon and the (now suspended) CA CEO Alexander Nix advising them on the legality of a foreign corporation (i.e. CA), and foreign nationals (such as Nix and others), carrying out work on US political campaigns.

This memo also details the legal structure of SCL and CA — the former being described as a “minority owner” of CA. It notes:

With this background we must look first at Cambridge Analytica, LLC (“Cambridge”) and then at the people involved and the contemplated tasks. As I understand it, Cambridge is a Delaware Limited Liability Company that was formed in June of 2014. It is operated through 5 managers, three preferred managers, Ms. Rebekah Mercer, Ms. Jennifer Mercer and Mr. Stephen Bannon, and two common managers, Mr. Alexander Nix and a person to be named. The three preferred managers are all United States citizens, Mr. Nix is not. Cambridge is primarily owned and controlled by US citizens, with SCL Elections Ltd., (“SCL”) a UK limited company being a minority owner. Moreover, certain intellectual property of SCL was licensed to Cambridge, which intellectual property Cambridge could use in its work as a US company in US elections, or other activities.

On the salient legal advice point, the memo concludes that US laws prohibiting foreign nationals managing campaigns — “including making direct or indirect decisions regarding the expenditure of campaign dollars” — will have “a significant impact on how Cambridge hires staff and operates in the short term”.



https://ift.tt/2uv3piT

Here’s Cambridge Analytica’s plan for voters’ Facebook data

More details have emerged about how Facebook data on millions of US voters was handled after it was obtained in 2014 by UK political consultancy Cambridge Analytica for building psychographic profiles of Americans to target election messages for the Trump campaign.

The dataset — of more than 50M Facebook users — is at the center of a scandal that’s been engulfing the social network giant since newspaper revelations published on March 17 pushed privacy and data protection to the top of the news agenda.

A UK parliamentary committee has published a cache of documents provided to it by ex-CA employee Chris Wylie, who gave public testimony before the committee at an oral hearing earlier this week. During that hearing he said he believes data on “substantially” more than 50M Facebookers was obtained by CA. Facebook has not commented publicly on that claim.

Among the documents the committee has published today (with some redactions) is the data-licensing contract between Global Science Research (GSR) — the company set up by the Cambridge University professor, Aleksandr Kogan, whose personality test app was used by CA as the vehicle for gathering Facebook users’ data — and SCL Elections (an affiliate of CA), dated June 4, 2014.

The document is signed by Kogan and CA’s now suspended CEO Alexander Nix.

The contract stipulates that all monies transferred to GSR will be used for obtaining and processing the data for the project — “to further develop, add to, refine and supplement GS psychometric scoring algorithms, databases and scores” — and none of the money paid to Kogan should be spent on other business purposes, such as salaries or office space, “unless otherwise approved by SCL”.

Wylie told the committee on Tuesday that CA chose to work with Kogan as he had agreed to work with them on acquiring and modeling the data first, without fixing commercial terms up front.

The contract also stipulates that Kogan’s company must gain “advanced written approval” from SCL to cover costs not associated with collecting the data — including “IT security”.

Which does rather underline CA’s priorities in this project: Obtain, as fast as possible, lots of personal data on US voters, but don’t worry much about keeping that personal information safe. Security is a backburner consideration in this contract.

CA responded to Wylie’s testimony on Tuesday with a statement rejecting his allegations — including claiming it “does not hold any GSR data or any data derived from GSR data”.

The company has not updated its press page with any new statement in light of the publication of a 2014 contract signed by its former CEO and GSR’s Kogan.

Earlier this week the committee confirmed that Nix has accepted its summons to return to give further evidence — saying the public session will likely take place on April 17.

Voter modeling across 11 US States

The first section of the contract between the CA affiliate company and GSR briefly describes the purpose of the project as being to conduct “political modeling” of the population in 11 US states.

On the data protection front, the contract includes a clause stating that both parties “warrant and undertake” to comply with all relevant privacy and data handling laws.

“Each of the parties warrants and undertakes that it will not knowingly do anything or permit anything to be done which might lead to a breach of any such legislation, regulations and/or directives by the other party,” it also states.

CA remains under investigation by the UK’s data protection watchdog, which obtained a warrant to enter its offices last week — and spent several hours gathering evidence. The company’s activities are being looked at as part of a wider investigation by the ICO into the use of data analytics for political purposes.

Commissioner Elizabeth Denham has previously said she’s leaning towards recommending a code of conduct for use of social media in political campaigning — and said she hopes to publish her report by May.

Another clause in the contract between GSR and SCL specifies that Kogan’s company will “seek out informed consent of the seed user engaging with GS Technology” — which would presumably refer to the ~270,000 people who agreed to take the personality quiz in the app deployed via Facebook’s platform.

Upon completion of the project, the contract specifies that Kogan’s company may continue to make use of SCL data for “academic research where no financial gain is made”.

Another clause details an additional research boon that would be triggered if Kogan was able to meet performance targets and deliver 2.1M matched records to SCL in the 11 US states it was targeting — so long as he met its minimum quality standards, at an average cost of $0.50 or less per matched record. In that event, he stood to also receive an SCL dataset of around 1M residents of Trinidad and Tobago — also “for use in academic research”.
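The bonus threshold is easy to check from the contract’s own figures. As a purely illustrative sketch (the numbers are from the contract; the calculation is mine, not anything in the documents):

```python
# Illustrative check of the contract's bonus clause: GSR stood to receive
# a Trinidad and Tobago dataset if it delivered 2.1M matched voter records
# at an average cost of $0.50 or less per record.
TARGET_RECORDS = 2_100_000
MAX_AVG_COST = 0.50  # dollars per matched record

def bonus_triggered(records_delivered: int, total_cost: float) -> bool:
    """True only if both the volume and average-cost conditions are met."""
    if records_delivered < TARGET_RECORDS:
        return False
    return total_cost / records_delivered <= MAX_AVG_COST

print(bonus_triggered(2_100_000, 1_000_000))  # True  (avg ~ $0.476)
print(bonus_triggered(2_100_000, 1_200_000))  # False (avg ~ $0.571)
```

In other words, hitting the target implied a total data-matching spend of no more than roughly $1.05M.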

The second section of the contract explains the project and its specification in detail.

Here it states that the aim of the project is “to infer psychological profiles”, using self-reported personality test data, political party preference and “moral value data”.

The 11 US states targeted by the project are also named as: Arkansas, Colorado, Florida, Iowa, Louisiana, Nevada, New Hampshire, North Carolina, Oregon, South Carolina and West Virginia.

The project is detailed in the contract as a seven-step process:

  • Step 1: Kogan’s company, GSR, generates an initial seed sample (the contract does not specify its size at this point) using “online panels”.

  • Step 2: GSR analyzes this seed training data using its own “psychometric inventories” to try to determine personality categories.

  • Step 3: Kogan’s personality quiz app is deployed on Facebook to gather the full dataset from respondents, and also to scrape a subset of data from their Facebook friends (here the contract notes: “upon consent of the respondent, the GS Technology scrapes and retains the respondent’s Facebook profile and a quantity of data on that respondent’s Facebook friends”).

  • Step 4: The psychometric data from the seed sample, plus the Facebook profile data and friend data, are all run through proprietary modeling algorithms, which the contract specifies are based on using Facebook likes to predict personality scores, with the stated aim of predicting the “psychological, dispositional and/or attitudinal facets of each Facebook record”.

  • Step 5: This modeling generates a series of scores per Facebook profile.

  • Step 6: The psychometrically scored profiles are matched with voter record data held by SCL, with the goal of matching (and thus scoring) at least 2M voter records for targeting voters across the 11 states.

  • Step 7: The matched records are returned to SCL, which would then be in a position to craft messages to voters based on their modeled psychometric scores.
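Step 4, the likes-to-personality modeling, is the technical heart of the pipeline. The contract does not disclose GSR’s actual algorithms, but the standard published approach to predicting traits from likes is a regression over a sparse user-by-page matrix. The following is a hypothetical, minimal sketch of that general technique; all data here is synthetic and none of the code is GSR’s or CA’s:

```python
import numpy as np

# Hypothetical sketch: predict a personality trait score from "likes".
# Rows = users, columns = pages; X[i, j] = 1.0 if user i likes page j.
rng = np.random.default_rng(0)
n_users, n_pages = 200, 50
X = (rng.random((n_users, n_pages)) < 0.1).astype(float)

# Pretend the seed users supplied self-reported trait scores (the training set).
true_weights = rng.normal(size=n_pages)
y = X @ true_weights + rng.normal(scale=0.1, size=n_users)

# Ridge regression fit, closed form: w = (X^T X + lam*I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_pages), X.T @ y)

# "Score" a new profile -- in the contract's terms, a scraped friend record.
new_profile = (rng.random(n_pages) < 0.1).astype(float)
predicted_score = float(new_profile @ w)
print(round(predicted_score, 3))
```

At scale the same fit would simply be applied to every scraped profile, yielding the per-profile scores described in step 5.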

The “ultimate aim” of the psychometric profiling product Kogan built off of the training and Facebook data sets is imagined as “a ‘gold standard’ of understanding personality from Facebook profile information, much like charting a course to sail”.

The possibility for errors is noted briefly in the document but it adds: “Sampling in this phase [phase 1 training set] will be repeated until assumptions and distributions are met.”

In a later section, on demographic distribution analysis, the contract mentions the possibility for additional “targeted data collection procedures through multiple platforms” to be used — even including “brief phone scripts with single-trait questions” — in order to correct any skews that might be found once the Facebook data is matched with voter databases in each state, (and assuming any “data gaps” could not be “filled in from targeted online samples”, as it also puts it).

In a section on “background and rationale”, the contract states that Kogan’s models have been “validity tested” on users who were not part of the training sample, and further claims: “Trait predictions based on Facebook likes are at near test-retest levels and have been compared to the predictions their romantic partners, family members, and friends make about their traits”.

“In all the previous cases, the computer-generated scores performed the best. Thus, the computer-generated scores can be more accurate than even the knowledge of very close friends and family members,” it adds.
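The comparison being claimed here — that computer-generated scores beat the judgments of friends and family — is conventionally measured by correlating each set of judgments against self-reported scores. A toy, entirely made-up illustration of that comparison (the numbers are invented; this is not Kogan’s data or method):

```python
import numpy as np

# Pearson correlation between two score vectors.
def pearson(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented example scores for six people on one trait (1-5 scale).
self_reported   = [3.2, 4.1, 2.5, 3.8, 4.6, 2.9]
model_predicted = [3.0, 4.3, 2.4, 3.9, 4.4, 3.1]
friend_judged   = [3.5, 3.6, 3.0, 3.2, 4.0, 3.4]

# The claim amounts to: the model's correlation with self-reports exceeds
# the friends' correlation with self-reports.
print(pearson(self_reported, model_predicted) >
      pearson(self_reported, friend_judged))  # True
```

Whether that held for Kogan’s actual models is, of course, exactly what the contract’s “validity tested” claim asserts and the document does not independently demonstrate.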

His technology is described as “different from most social research measurement instruments” in that it is not solely based on self-reported data — with the follow-on claim being made that: “Using observed data from Facebook users’ profiles makes GS’ measurements genuinely behavioral.”

That suggestion, at least, seems fairly tenuous — given that a portion of Facebook users are undoubtedly aware that the site is tracking their activity when they use it, which in turn is likely to affect how they use Facebook.

So the idea that Facebook usage is a 100% naked reflection of personality deserves far more critical questioning than is implied by Kogan’s description of it in the contract with SCL.

And, indeed, some of the commentary around this news story has queried the value of the entire exposé by suggesting CA’s psychometric targeting wasn’t very effective — ergo, it may not have had a significant impact on the US election.

In contrast to claims being made for his technology in the 2014 contract, Kogan himself claimed in a TV interview earlier this month (after the scandal broke) that his predictive modeling was not very accurate at an individual level — suggesting it would only be useful in aggregate to, for example, “understand the personality of New Yorkers”.

Yesterday Channel 4 News reported that it had been able to obtain some of the data Kogan modeled for CA — thereby supporting Wylie’s testimony that CA had not locked down access to the data. And in its report, the broadcaster spoke to some of the named US voters in Colorado — showing them the scores Kogan’s models had given them.

Unsurprisingly, not all their interviewees thought the scores were an accurate reflection of who they were.

However, regardless of how effective (or not) Kogan’s methods were, the bald fact that personal information on 50M+ Facebook users was so easily sucked out of the platform is of unquestionable public interest and concern.

The added fact this data set was used for psychological modeling for political message targeting purposes without people’s knowledge or consent just further underlines the controversy. Whether the political microtargeting method worked well or was hit and miss is really by the by.

In the contract, Kogan’s psychological profiling methods are described as “less costly, more detailed, and more quickly collected” than other individual profiling methods, such as “standard political polling or phone samples”.

The contract also flags up how the window of opportunity for his approach was closing — at least on Facebook’s platform. “GS’s method relies on a pre-existing application functioning under Facebook’s old terms of service,” it observes. “New applications are not able to access friend networks and no other psychometric profiling applications exist under the old Facebook terms.”

As I wrote last weekend, Facebook faced a legal challenge to the lax system of app permissions it operated in 2011. And after a data protection audit and re-audit by the Irish Data Protection Commissioner, in 2011 and 2012, the regulator recommended it shutter developers’ access to friend networks — which Facebook finally did (for both old and new apps) as of mid 2015.

But in mid 2014 existing developers on its platform could still access the data — as Kogan was able to, handing it off to SCL and its affiliates.

Other documents published by the committee today include a contract between SCL Elections and Aggregate IQ, a Canadian data company which Wylie described to the committee as ‘CA Canada’ (aka yet another affiliate of CA/SCL).

This contract, which is dated September 15, 2014, is for the “Design and development of an Engagement Platform System”, also referred to as “the Ripon Platform”, and described as: “A scalable engagement platform that leverages the strength of SCLs modelling data, providing an actionable toolset and dashboard interface for the target campaigns in the 2014 election cycle. This will consist of a bespoke engagement platform (SCL Engage) to help make SCLs behavioural microtargeting data actionable while making campaigns more accountable to donors and supporter”.

Another contract between Aggregate IQ and SCL is dated November 25, 2013, and covers the delivery of a CRM system, a website and “the acquisition of online data” for a political party in Trinidad and Tobago. In this contract a section on “behavioral data acquisition” details their intentions thus:

  • Identify and obtain qualified sources of data that illustrate user behaviour and contribute to the development of psychographic profiling in the region

  • This data may include, but is not limited to:

    • Internet Service Provider (ISP) log files

    • First party data logs

    • Third party data logs

    • Ad network data

    • Social bookmarking

    • Social media sharing (Twitter, FB, MySpace)

    • Natural Language Processing (NLP) of URL text and images

    • Reconciliation of IP and User-Agent to home address, census tract, or dissemination area
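That last bullet — reconciling IP and User-Agent data to home addresses — is, mechanically, a record-linkage join between a browsing log and a subscriber file. A purely hypothetical sketch of what such a join looks like (every name, IP and address below is invented; the IPs come from documentation-reserved ranges):

```python
# Hypothetical record linkage: join browsing logs to a subscriber file on IP.
# This illustrates the mechanics only; it is not AIQ's or SCL's code.
web_logs = [
    {"ip": "203.0.113.7",  "pages": ["news", "sports"]},
    {"ip": "198.51.100.9", "pages": ["politics"]},
]
subscriber_file = {
    "203.0.113.7":  {"household": "12 Example St"},
    "198.51.100.9": {"household": "34 Sample Ave"},
}

# Each matched log row gains the subscriber's household address,
# de-anonymizing the browsing record.
linked = [
    {**row, **subscriber_file[row["ip"]]}
    for row in web_logs
    if row["ip"] in subscriber_file
]
print(linked[0]["household"])  # "12 Example St"
```

The simplicity of the join is the point: once two such datasets are in one place, linking them requires no special technology, only access.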

In his evidence to the committee on Tuesday Wylie described the AIQ Trinidad project as a “pre-cursor to the Rippon project to see how much data could be pulled and could we profile different attributes in people”.

He also alleged AIQ had used hacker-type techniques to obtain data. “AIQ’s role was to go and find data,” he told the committee. “The contracting is pulling ISP data and there’s also emails that I’ve passed on to the committee where AIQ is working with SCL to find ways to pull and then de-anonymize ISP data. So, like, raw browsing data.”

Another document in the bundle published today details a project pitch by SCL to carry out $200,000 worth of microtargeting and political campaign work for the conservative organization ForAmerica.org — for “audience building and supporter mobilization campaigns”.

There is also an internal SCL email chain regarding a political targeting project that also appears to involve the Kogan-modeled Facebook data, referred to as the “Bolton project” (which seems to refer to work done for the now US national security advisor, John Bolton), with some back and forth over concerns about delays and problems with data matching in some of the US states and overall data quality.

“Need to present the little information we have on the 6,000 seeders to [sic] we have to give a rough and ready and very preliminary reading on that sample ([name redacted] will have to ensure the appropriate disclaimers are in place to manage their expectations and the likelihood that the results will change once more data is received). We need to keep the client happy,” is one of the suggested next steps in an email written by an unidentified SCL staffer working on the Bolton project.

“The Ambassador’s team made it clear that he would want some kind of response on the last round of foreign policy questions. Though not ideal, we will simply piss off a man who is potentially an even bigger client if we remain silent on this because it has been clear to us this is something he is particularly interested in,” the emailer also writes.

“At this juncture, we unfortunately don’t have the luxury of only providing the perfect data set but must deliver something which shows the validity of what we have been promising we can do,” the emailer adds.

Another document is a confidential memorandum prepared for Rebekah Mercer (the daughter of US billionaire Robert Mercer; Wylie has said Mercer provided the funding to set up CA), former Trump advisor Steve Bannon and the (now suspended) CA CEO Alexander Nix advising them on the legality of a foreign corporation (i.e. CA), and foreign nationals (such as Nix and others), carrying out work on US political campaigns.

This memo also details the legal structure of SCL and CA — the former being described as a “minority owner” of CA. It notes:

With this background we must look first at Cambridge Analytica, LLC (“Cambridge”) and then at the people involved and the contemplated tasks. As I understand it, Cambridge is a Delaware Limited Liability Company that was formed in June of 2014. It is operated through 5 managers, three preferred managers, Ms. Rebekah Mercer, Ms. Jennifer Mercer and Mr. Stephen Bannon, and two common managers, Mr. Alexander Nix and a person to be named. The three preferred managers are all United States citizens, Mr. Nix is not. Cambridge is primarily owned and controlled by US citizens, with SCL Elections Ltd., (“SCL”) a UK limited company being a minority owner. Moreover, certain intellectual property of SCL was licensed to Cambridge, which intellectual property Cambridge could use in its work as a US company in US elections, or other activities.

On the salient legal advice point, the memo concludes that US laws prohibiting foreign nationals managing campaigns — “including making direct or indirect decisions regarding the expenditure of campaign dollars” — will have “a significant impact on how Cambridge hires staff and operates in the short term”.

https://ift.tt/2uv3piT