Monday 30 April 2018

Twitter announces new video partnerships with NBCUniversal and ESPN

Twitter is hosting its Digital Content NewFronts tonight, where it’s unveiling 30 renewals and new content deals — the company says that’s nearly twice as many as it announced last year.

Those include partnerships with the big players in media — starting with NBCUniversal, which will be sharing live video and clips from properties including NBC News, MSNBC, CNBC and Telemundo.

Twitter also announced some of the shows it will be airing as part of the ESPN deal announced earlier today: SportsCenter Live (a Twitter version of the network’s flagship) and Fantasy Focus Live (a livestream of the fantasy sports podcast).

Plus, the company said it’s expanding its existing partnership with Viacom with shows like Comedy Central’s Creator’s Room, BET Breaks and MTV News.

During the NewFronts event, Twitter’s head of video Kayvon Beykpour said daily video views on the platform have nearly doubled in the past year. And Kay Madati, the company’s head of content partnerships, described the company as “the ultimate mobile platform where video and conversation share the same screen.”

As Twitter continues to invest in video content, it’s been emphasizing its advantage in live video, a theme that continued in this year’s announcement.

“Twitter is the only place where conversation is tied to video and the biggest live moments, giving brands the unique ability to connect with leaned in consumers who are shaping culture,” said Twitter Global VP of Revenue and Content Partnerships Matthew Derella in a statement. “That’s our superpower.”

During the event, Derella also (implicitly) contrasted Twitter with other digital platforms that have struggled with questions about transparency and whether ads are running in an appropriate environment. Tonight, he said marketers could say goodbye to unsafe brand environments and a lack of transparency: “And we say hello to you being in control of where your video aligns … we say hello to a higher measure of transparency, we say hello to new premium inventory and a break from the same old choices.”

On top of all the new content, Twitter is also announcing new ad programs. There are Creator Originals, a set of scripted series from influencers who will be paired up with sponsored brands. (The program is powered by Niche, the influencer marketing startup that Twitter acquired a few years ago.) And there’s a new Live Brand Studio — as the name suggests, it’s a team that works with marketers to create live video.

Here are some other highlights from the content announcements:

  • CELEBrate, a series from Ellen Digital Studios where people get heartwarming messages from their idols.
  • Delish Food Day and IRL from Hearst Magazines Digital Media.
  • Power Star Live, which is “inspired by the cultural phenomenon of Black Twitter” and livestreamed from the Atlanta University Center, from Will Packer Media.
  • BuzzFeed News is renewing AM to DM until the end of 2018.
  • Pattern, a new brand focused on weather- and science-related news.
  • Programming highlighting women around the world, from the Huffington Post (which, like TechCrunch, is owned by Verizon/Oath), History, Vox and BuzzFeed News.

Developing



https://ift.tt/2ji2tWE

WhatsApp CEO Jan Koum quits Facebook due to privacy intrusions

“It is time for me to move on . . . I’m taking some time off to do things I enjoy outside of technology, such as collecting rare air-cooled Porsches, working on my cars and playing ultimate frisbee,” WhatsApp co-founder, CEO, and Facebook board member Jan Koum wrote today. The announcement followed shortly after The Washington Post reported that Koum would leave due to disagreements with Facebook management about WhatsApp user data privacy and weakened encryption. Koum obscured that motive in his note, which says “I’ll still be cheering WhatsApp on – just from the outside.”

Facebook CEO Mark Zuckerberg quickly commented on Koum’s Facebook post about his departure, writing “Jan: I will miss working so closely with you. I’m grateful for everything you’ve done to help connect the world, and for everything you’ve taught me, including about encryption and its ability to take power from centralized systems and put it back in people’s hands. Those values will always be at the heart of WhatsApp.” That comment further tries to downplay the idea that Facebook pushed Koum away by trying to erode encryption.

It’s currently unclear who will replace Koum as WhatsApp’s CEO, and what will happen to his Facebook board seat.

Values Misaligned

Koum sold WhatsApp to Facebook in 2014 for a jaw-dropping $19 billion. Since then, the app has more than tripled its user count to 1.5 billion, making the price to turn messaging into a one-horse race seem like a steal. But at the time of the acquisition, Koum and co-founder Brian Acton were assured that WhatsApp wouldn’t have to run ads or merge its data with Facebook’s. So were regulators in Europe, where WhatsApp is most popular.

A year and a half later, though, Facebook pressured WhatsApp to change its terms of service and give users’ phone numbers to its parent company. That let Facebook target those users with more precise advertising, such as by letting businesses upload lists of phone numbers to hit those people with promotions. Facebook was eventually fined $122 million by the European Union in 2017 — a paltry sum for a company earning over $4 billion in profit per quarter.

But the perceived invasion of WhatsApp user privacy drove a wedge between Koum and the parent company. Acton left Facebook in November, and has publicly supported the #DeleteFacebook movement since.

WashPo writes that Koum was also angered by Facebook executives pushing for a weakening of WhatsApp’s end-to-end encryption in order to facilitate its new WhatsApp For Business program. It’s possible that letting multiple team members from a business all interact with its WhatsApp account could be incompatible with strong encryption. Facebook plans to finally make money off WhatsApp by offering bonus services to big companies like airlines, e-commerce sites, and banks that want to conduct commerce over the chat app.

Jan Koum, CEO and co-founder of WhatsApp, speaks at the Digital Life Design conference in Munich, Germany, on January 18, 2016. (Photo: TOBIAS HASE/AFP/Getty Images)

Koum was heavily critical of advertising in apps, once telling Forbes that “Dealing with ads is depressing . . . You don’t make anyone’s life better by making advertisements work better.” He vowed to keep them out of WhatsApp. But over the past year, Facebook has rolled out display ads in the Messenger inbox. Without Koum around, Facebook might push to expand those obtrusive ads to WhatsApp as well.

The high-profile departure comes at a vulnerable time for Facebook, with its big F8 developer conference starting tomorrow despite Facebook simultaneously shutting down parts of its dev platform as penance for the Cambridge Analytica scandal. Meanwhile, Google is trying to fix its fragmented messaging strategy, ditching apps like Allo to focus on a mobile carrier-backed alternative to SMS it’s building into Android Messages.

While the News Feed made Facebook rich, it also made it the villain. Messaging has become its strongest suit thanks to the dual dominance of Messenger and WhatsApp. Considering many users surely don’t even realize WhatsApp is owned by Facebook, Koum’s departure over policy concerns isn’t likely to change that. But it’s one more point in what’s becoming a thick line connecting Facebook’s business ambitions to its cavalier approach to privacy.

You can read Koum’s full post below.

It's been almost a decade since Brian and I started WhatsApp, and it's been an amazing journey with some of the best…

Posted by Jan Koum on Monday, April 30, 2018



https://ift.tt/2jigy6r

Video: Larry Harvey and JP Barlow on Burning Man and tech culture

Larry Harvey, founder of the counterculture festival Burning Man, passed away this weekend. He was 70.

Harvey created a movement and contributed to the flowering both of counter-culture and, ultimately, of tech culture.

Both he and John Perry Barlow, who also passed in February this year after a long period of ill health, were huge advocates of free speech. Barlow wrote lyrics for the Grateful Dead, and then became a digital rights activist in later life.

In 2013 I caught up with both of them and recorded a joint 24 minute interview, just a short walk from the venue for the ‘Le Web London’ conference.

Amid the street noise and the traffic, they discussed some of the intellectual underpinnings of startup entrepreneurship and its parallels with Burning Man, in what might have been their first-ever joint interview.

We went over early computer culture, and how there was a “revolutionary zeal in the notion of intellectual empowerment” in Psychedelia which found common cause in tech culture.

We present for you once again, this iconic interview, in memory of these great men.

https://ift.tt/2HDAMpJ

Covee uses blockchain to allow experts worldwide to collaborate

Solving complex data-driven problems requires a lot of teamwork. But, of course, teamwork is typically restricted to companies where everyone is working under the same roof. While distributed teams have become commonplace in tech startups, taking that to the next level by linking up disparate groups of people all working on the same problem (but not in the same company) has been all but impossible. However, in theory, you could use a blockchain to do such a thing, where the work generated was constantly accounted for on-chain.

That’s in theory. In practice, there’s now a startup that claims to have come up with this model. And it’s raised funding.

Covee, a startup out of Berlin, has raised a modest EUR 1.35m in a round led by LocalGlobe in London, with Atlantic Labs in Berlin and a selection of angels. Prior to this, the company was bootstrapped by CEO Dr Marcel Dietsch, who left his job at a London-based hedge fund, and his long-time friend, COO Dr Raphael Schoettler, who had previously worked for Deutsche Bank. They are joined by CTO Dr Jochen Krause, an early blockchain investor and bitcoin miner who previously worked as a quant developer at Scalable Capital and a data scientist at Valora.

What sort of things could this platform be used for? Well, it could be used to bring together people to use machine learning algorithms to improve cancer diagnosis through tumor detection, or perhaps develop a crypto trading algorithm.

There are obvious benefits for scientists. They could work more flexibly, access a more diverse range of projects, choose their teammates, and have their work reviewed by peers.

The platform also means you could be rewarded fairly for your contribution.

The upside for corporates is that they can use distributed workers with no middleman platform to pay and no management consultancy fees, and gain access to a talent pool (data engineers, statisticians, domain experts) that is difficult to bring inside the firm.

Now, there are indeed others doing this, including Aragon (decentralised governance for everything), Colony (teamwork for everything), and Upwork (a freelance jobs platform for individuals). All are different and have their limitations, of course.

Covee plans to make money by having users pay a transaction fee for using the network infrastructure. They plan to turn this into a fully open-source decentralised network, with this transaction fee attached. But Covee will also offer this as a service if clients prefer not to deal with blockchain tokens and the platform directly.
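The economics described above can be sketched in a few lines. This is a hedged illustration, not Covee’s actual mechanism: a project reward, minus a network transaction fee, is split in proportion to each member’s recorded contribution. The fee rate and contribution weights here are invented for the example.

```python
# Toy model of a fee-plus-proportional-payout settlement.
# All numbers are hypothetical, for illustration only.

def settle(reward: float, fee_rate: float, contributions: dict) -> dict:
    """Split (reward - network fee) proportionally to contribution weights."""
    fee = reward * fee_rate
    pool = reward - fee
    total = sum(contributions.values())
    return {member: round(pool * share / total, 2)
            for member, share in contributions.items()}

payouts = settle(
    reward=1000.0,
    fee_rate=0.05,  # hypothetical 5% network transaction fee
    contributions={"alice": 6, "bob": 3, "carol": 1},
)
print(payouts)  # {'alice': 570.0, 'bob': 285.0, 'carol': 95.0}
```

In a real network the contribution weights would come from the on-chain record of work, and the fee would be denominated in the network’s token.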

Dietsch says: “Covee was founded in the first half of 2017 in Berlin and relocated to Zurich, Switzerland late 2017 where we incorporated Covee Network. Moving to Switzerland was important for us because it has one of the oldest and strongest blockchain ecosystems in the world and an excellent pipeline of talent from institutions such as ETH Zurich and the University of Zurich. The crypto-friendly stance of the country means it has all the necessary infrastructure as well as clear regulations for token economies.”

https://ift.tt/2w01ZxL

Google and NBC team up to create VR content you might actually watch

NBC wants to bring some of you onto the sets of your favorite shows, like Saturday Night Live and Vanderpump Rules. With the help of VR - and a partnership with Google - they’re about to do just that.

The deal inked between the two companies involves 10 ‘multi-episode VR productions’ that will be filmed using Google’s Jump VR platform and posted to YouTube, where you can watch them for free.

Right now, these 10 VR productions are being shot in normal VR - the kind that can be viewed using Google Cardboard, Daydream View or Samsung Gear VR - but NBC says that select future productions will be shot in VR180, a new VR format providing 4K, three-dimensional video. 

The first set of shows to get the VR treatment are from Bravo’s Vanderpump set and include a 360 tour of Lisa Vanderpump’s recently opened Hollywood dog rescue center, Vanderpump Dogs, as well as an exclusive bonus clip from the season finale of “Vanderpump Rules.”  

Are true VR TV shows on the horizon? 

Right now it appears as though the partnership is just producing additional content in VR, rather than dedicating the resources necessary to create actual shows for the VR audience - an area that’s severely lacking at the moment.

The silver lining here is that, if enough people tune into the VR videos, NBC and Google might be convinced to create bespoke series for VR devices.

“We are constantly looking for opportunities to bring consumers new ways to experience content from across the NBCUniversal portfolio,” said Ron Lamprecht, Executive Vice President, NBCUniversal Digital Enterprises in a press release sent to TechRadar.

Does that mean there's a chance the partnership might open the door to full-on 360-degree TV shows? Maybe!

https://ift.tt/2jkJ1IS

Facebook is trying to block Schrems II privacy referral to EU top court

Facebook’s lawyers are attempting to block a High Court decision in Ireland, where its international business is headquartered, to refer a long-running legal challenge to the bloc’s top court.

The social media giant’s lawyers asked the court to stay the referral to the CJEU today, Reuters reports. Facebook is trying to appeal the referral by challenging Irish case law — and wants a stay granted in the meanwhile.

The case relates to a complaint filed by privacy campaigner and lawyer Max Schrems regarding a transfer mechanism that’s currently used by thousands of companies to authorize flows of personal data on EU citizens to the US for processing. Though Schrems was actually challenging the use of so-called Standard Contractual Clauses (SCCs) by Facebook, specifically, when he updated an earlier complaint on the same core data transfer issue — which relates to US government mass surveillance practices, as revealed by the 2013 Snowden disclosures — with Ireland’s data watchdog.

However the Irish Data Protection Commissioner decided to refer the issue to the High Court to consider the legality of SCCs as a whole. And earlier this month the High Court decided to refer a series of questions relating to EU-US data transfers to Europe’s top court — seeking a preliminary ruling on a series of fundamental questions that could even unseat another data transfer mechanism, called the EU-US Privacy Shield, depending on what CJEU judges decide.

An earlier legal challenge by Schrems — which was also related to the clash between US mass surveillance programs (which harvest data from social media services) and EU fundamental rights (which mandate that web users’ privacy is protected) — resulted in the previous arrangement for transatlantic data flows being struck down by the CJEU in 2015, after standing for around 15 years.

Hence the current case being referred to by privacy watchers as ‘Schrems II’. You can also see why Facebook is keen to delay another CJEU referral if it can.

According to comments made by Schrems on Twitter the Irish High Court reserved judgement on Facebook’s request today, with a decision expected within a week…

Facebook’s appeal is based on trying to argue against Irish case law — which Schrems says does not allow for an appeal against such a referral, hence he’s couching it as another delaying tactic by the company:

We reached out to Facebook for comment on the case. At the time of writing it had not responded.

In a statement from October, after an earlier High Court decision on the case, Facebook said:

Standard Contract Clauses provide critical safeguards to ensure that Europeans’ data is protected once transferred to companies that operate in the US or elsewhere around the globe, and are used by thousands of companies to do business. They are essential to companies of all sizes, and upholding them is critical to ensuring the economy can continue to grow without disruption.

This ruling will have no immediate impact on the people or businesses who use our services. However it is essential that the CJEU now considers the extensive evidence demonstrating the robust protections in place under Standard Contractual Clauses and US law, before it makes any decision that may endanger the transfer of data across the Atlantic and around the globe.

https://ift.tt/2r8Wqag


The new TCL Roku 4K HDR TVs for 2018 just launched at $650

The new TCL Roku 4K TVs for 2018 are poised to become our favorite affordable televisions this year thanks to their high-end features and desirable starting price.

TCL's all-new 6-series costs $649 on Amazon for the 55-inch version, and $999 for the 65-inch model, and each packs in advancements over last year's TCL P6-Series.

Notably, these new LED TVs feature more local dimming zones – or what TCL calls 'Contrast Control Zones' – for a superior contrast ratio. It's how LED manufacturers are trying to compete with the more contrast-rich OLED technology.

The P6-Series had local dimming, too, but only 72 of these contrast control zones. This year's televisions bump that up to 96 zones for the 55-inch TV, and 120 zones for the 65-inch model. So, on the larger TV, in addition to the 10 inches of extra screen space, you get 24 more local dimming zones for your extra $350.

All of this is in addition to returning TV features, including Dolby Vision HDR (carried over from 2017), the 4K resolution (carried over from the 2016 set), and the easy-to-use Roku interface (from the 2014 original TCL Roku TV).

Going through that TV timeline, we can see that TCL's televisions have evolved, with the 2018 sets sporting a more aesthetically pleasing brushed metal finish. They also contain more under-the-hood color precision smarts, like a wider color gamut, the DCI-P3 reference color standard, and the all-new iPQ Engine to intelligently pinpoint the right colors.

https://ift.tt/2KmVIPA

Twitter also sold data access to Cambridge Analytica researcher

Since it was revealed that Cambridge Analytica improperly accessed the personal data of millions of Facebook users, one question has lingered in the minds of the public: What other data did Dr. Aleksandr Kogan gain access to?

Twitter confirmed to The Telegraph on Saturday that GSR, Kogan’s own commercial enterprise, had purchased one-time API access to a random sample of public tweets from a five-month period between December 2014 and April 2015. Twitter told Bloomberg that, following an internal review, the company did not find any access to private data about people who use Twitter.

Twitter sells API access to large organizations or enterprises for the purposes of surveying sentiment or opinion during various events, or around certain topics or ideas.
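The kind of aggregate sentiment survey that this API access enables can be reduced to a toy sketch. This is purely illustrative, it does not touch any real Twitter API and uses an invented word list and made-up sample tweets:

```python
# Toy word-list sentiment scorer over a (hypothetical) sample of
# public tweets -- a crude stand-in for the sentiment surveying
# described above. Real systems use trained models, not word lists.

POSITIVE = {"great", "love", "win"}
NEGATIVE = {"bad", "hate", "lose"}

def sentiment(text: str) -> int:
    """Positive-minus-negative word count; >0 is net positive."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

sample = [
    "I love this event, what a win",
    "bad night, hate to lose",
    "no strong feelings either way",
]
scores = [sentiment(t) for t in sample]
print(scores)  # [2, -3, 0]
```

An enterprise buyer would run something like this (at vastly greater sophistication) over millions of sampled tweets to chart opinion around an event or topic.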

Here’s what a Twitter spokesperson said to The Telegraph:

Twitter has also made the policy decision to off-board advertising from all accounts owned and operated by Cambridge Analytica. This decision is based on our determination that Cambridge Analytica operates using a business model that inherently conflicts with acceptable Twitter Ads business practices. Cambridge Analytica may remain an organic user on our platform, in accordance with the Twitter Rules.

Obviously, this doesn’t have the same scope as the data harvested about users on Facebook. Twitter’s data on users is far less personal. Location on the platform is opt-in and generic at that, and users are not forced to use their real name on the platform.

Still, it shows just how broad the Cambridge Analytica data collection was ahead of the 2016 election.

We reached out to Twitter and will update when we hear back.



https://ift.tt/2HE9dJ2

NHS switches to Windows 10 to beat the next WannaCry-style cyber-attack

The government has announced that the NHS will be upgrading its PCs to Windows 10, and that £150 million will be spent in the next three years to beef up NHS cybersecurity in general.

In a press statement, the Department of Health and Social Care stressed that all health and care organisations will be using Windows 10 with ‘up-to-date security settings’ to better defend against major cyber-attacks like WannaCry, which hit the NHS hard a year ago.

As well as the change in operating system, the plan is to set up a new digital security operations centre to respond to security incidents more swiftly, and allow threats to be detected and isolated before they spread.

Trusty toolkit

Other measures include a data security toolkit requiring health and care organisations to meet 10 security standards, plus £21 million spent on upgrading firewalls and network infrastructure at major trauma centre hospitals and ambulance trusts, alongside £39 million to be funnelled to NHS trusts to help address security weaknesses in infrastructure.

A new text messaging alert system will also be set up that will allow trusts to maintain access to accurate information should internet services go down.

Jeremy Hunt, Health and Social Care Secretary, commented: “We know cyber-attacks are a growing threat, so it is vital our health and care organisations have secure systems which patients trust.

“We have been building the capability of NHS systems over a number of years, but there is always more to do to future-proof our NHS against this threat. This new technology will ensure the NHS can use the latest and most resilient software available – something the public rightly expect.”

https://ift.tt/2r9PhYk

Europe eyeing bot IDs, ad transparency and blockchain to fight fakes

European Union lawmakers want online platforms to come up with their own systems to identify bot accounts.

This is part of a voluntary Code of Practice the European Commission now wants platforms to develop and apply — by this summer — within a wider package of proposals generally aimed at tackling the problematic spread and impact of disinformation online.

The proposals follow an EC-commissioned report last month, by its High-Level Expert Group, which recommended more transparency from online platforms to help combat the spread of false information online — and also called for urgent investment in media and information literacy education, and strategies to empower journalists and foster a diverse and sustainable news media ecosystem.

Bots, fake accounts, political ads, filter bubbles

In an announcement on Friday the Commission said it wants platforms to establish “clear marking systems and rules for bots” in order to ensure “their activities cannot be confused with human interactions”. It does not go into a greater level of detail on how that might be achieved. Clearly it’s intending platforms to have to come up with relevant methodologies.

Identifying bots is not an exact science — as academics conducting research into how information spreads online could tell you. The current tools that exist for trying to spot bots typically involve rating accounts across a range of criteria to give a score of how likely an account is to be algorithmically controlled vs human controlled. But platforms do at least have a perfect view into their own systems, whereas academics have had to rely on the variable level of access platforms are willing to give them.
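The scoring approach described above can be sketched as a weighted combination of heuristic signals. This is a toy illustration only: the feature names, thresholds and weights below are invented, not any platform's or research tool's actual method.

```python
# Toy bot-likelihood score: combine several 0..1 heuristic signals
# with hypothetical weights. Real tools use many more features and
# learned (not hand-picked) weights.

def bot_score(account: dict) -> float:
    """Return a score in [0, 1]; higher means more bot-like."""
    signals = {
        # Very high posting frequency is a classic automation signal.
        "tweets_per_day": min(account["tweets_per_day"] / 100.0, 1.0),
        # Never customizing the profile is weakly suspicious.
        "default_profile": 1.0 if account["default_profile"] else 0.0,
        # Following far more accounts than follow back is suspicious.
        "followers_ratio": 1.0 - min(
            account["followers"] / max(account["following"], 1), 1.0
        ),
    }
    weights = {"tweets_per_day": 0.5, "default_profile": 0.2,
               "followers_ratio": 0.3}  # hypothetical weights
    return sum(weights[k] * signals[k] for k in signals)

suspect = {"tweets_per_day": 400, "default_profile": True,
           "followers": 3, "following": 2000}
human = {"tweets_per_day": 5, "default_profile": False,
         "followers": 500, "following": 300}

print(bot_score(suspect) > bot_score(human))  # True
```

The platforms' advantage, as noted above, is that they can feed far richer internal signals (IP addresses, client fingerprints, timing patterns) into models like this, where outside researchers cannot.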

Another factor here is that given the sophisticated nature of some online disinformation campaigns — the state-sponsored and heavily resourced efforts by Kremlin backed entities such as Russia’s Internet Research Agency, for example — if the focus ends up being algorithmically controlled bots vs IDing bots that might have human agents helping or controlling them, plenty of more insidious disinformation agents could easily slip through the cracks.

That said, other measures in the EC’s proposals for platforms include stepping up their existing efforts to shutter fake accounts and being able to demonstrate the “effectiveness” of such efforts — so greater transparency around how fake accounts are identified and the proportion being removed (which could help surface more sophisticated human-controlled bot activity on platforms too).

Another measure from the package: The EC says it wants to see “significantly” improved scrutiny of ad placements — with a focus on trying to reduce revenue opportunities for disinformation purveyors.

Restricting targeting options for political advertising is another component. “Ensure transparency about sponsored content relating to electoral and policy-making processes,” is one of the listed objectives on its fact sheet — and ad transparency is something Facebook has said it’s prioritizing since revelations about the extent of Kremlin disinformation on its platform during the 2016 US presidential election, with expanded tools due this summer.

The Commission also says generally that it wants platforms to provide “greater clarity about the functioning of algorithms” and enable third-party verification — though there’s no greater level of detail being provided at this point to indicate how much algorithmic accountability it’s after from platforms.

We’ve asked for more on its thinking here and will update this story with any response. It looks to be seeking to test the water to see how much of the workings of platforms’ algorithmic blackboxes can be coaxed from them voluntarily — such as via measures targeting bots and fake accounts — in an attempt to stave off formal and more far-reaching regulation down the line.


Europe eyeing bot IDs, ad transparency and blockchain to fight fakes

European Union lawmakers want online platforms to come up with their own systems to identify bot accounts.

This is as part of a voluntary Code of Practice the European Commission now wants platforms to develop and apply — by this summer — as part of a wider package of proposals it’s put out which are generally aimed at tackling the problematic spread and impact of disinformation online.

The proposals follow a report published last month by the Commission's High-Level Expert Group, which recommended more transparency from online platforms to help combat the spread of false information online — and also called for urgent investment in media and information literacy education, and strategies to empower journalists and foster a diverse and sustainable news media ecosystem.

Bots, fake accounts, political ads, filter bubbles

In an announcement on Friday the Commission said it wants platforms to establish “clear marking systems and rules for bots” in order to ensure “their activities cannot be confused with human interactions”. It does not go into greater detail on how that might be achieved; evidently the intention is for platforms to come up with the relevant methodologies themselves.

Identifying bots is not an exact science — as academics conducting research into how information spreads online could tell you. The current tools that exist for trying to spot bots typically involve rating accounts across a range of criteria to give a score of how likely an account is to be algorithmically controlled vs human controlled. But platforms do at least have a perfect view into their own systems, whereas academics have had to rely on the variable level of access platforms are willing to give them.
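Public tools of this kind (Indiana University's Botometer is the best-known example) work roughly like the sketch below: rate an account on a handful of behavioural signals, normalize each one, and combine them into a likelihood score. Everything here (the signals, weights and cutoffs) is an illustrative assumption, not any platform's or tool's actual criteria:

```python
# Toy bot-likelihood scorer: each signal is normalized to [0, 1],
# then combined as a weighted sum. All signals, weights and cutoffs
# below are invented for illustration.

def bot_score(account: dict) -> float:
    signals = {
        # Very high posting volume (capped at 100 posts/day)
        "volume": min(account["posts_per_day"] / 100.0, 1.0),
        # Suspiciously regular posting intervals suggest automation
        "regularity": 1.0 - min(account["interval_stddev_sec"] / 3600.0, 1.0),
        # Default profile (no avatar or bio) is weakly suspicious
        "default_profile": 1.0 if account["default_profile"] else 0.0,
        # Following far more accounts than follow back
        "follow_ratio": min(
            account["following"] / max(account["followers"], 1) / 50.0, 1.0),
    }
    weights = {"volume": 0.35, "regularity": 0.30,
               "default_profile": 0.10, "follow_ratio": 0.25}
    return sum(weights[name] * value for name, value in signals.items())

suspect = {"posts_per_day": 400, "interval_stddev_sec": 30,
           "default_profile": True, "following": 5000, "followers": 20}
human = {"posts_per_day": 3, "interval_stddev_sec": 20000,
         "default_profile": False, "following": 300, "followers": 250}

print(bot_score(suspect) > 0.8, bot_score(human) < 0.3)  # -> True True
```

Real systems face exactly the access asymmetry described above: a platform can fold in private signals (IP ranges, client fingerprints, login patterns) that outside researchers never get to see.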

Another factor here is the sophistication of some online disinformation campaigns — such as the state-sponsored, heavily resourced efforts of Kremlin-backed entities like Russia’s Internet Research Agency. If marking efforts focus only on algorithmically controlled bots, rather than also trying to identify bots that have human agents helping or controlling them, plenty of more insidious disinformation agents could easily slip through the cracks.

That said, other measures in the EC’s proposals for platforms include stepping up their existing efforts to shutter fake accounts and being able to demonstrate the “effectiveness” of such efforts — so greater transparency around how fake accounts are identified and the proportion being removed (which could help surface more sophisticated human-controlled bot activity on platforms too).

Another measure from the package: The EC says it wants to see “significantly” improved scrutiny of ad placements — with a focus on trying to reduce revenue opportunities for disinformation purveyors.

Restricting targeting options for political advertising is another component. “Ensure transparency about sponsored content relating to electoral and policy-making processes,” is one of the listed objectives on its fact sheet — and ad transparency is something Facebook has said it’s prioritizing since revelations about the extent of Kremlin disinformation on its platform during the 2016 US presidential election, with expanded tools due this summer.

The Commission also says generally that it wants platforms to provide “greater clarity about the functioning of algorithms” and enable third-party verification — though there’s no greater level of detail being provided at this point to indicate how much algorithmic accountability it’s after from platforms.

We’ve asked for more on its thinking here and will update this story with any response. The Commission looks to be testing the water to see how much of the workings of platforms’ algorithmic black boxes can be coaxed from them voluntarily — such as via measures targeting bots and fake accounts — in an attempt to stave off formal and more sweeping regulation down the line.

Filter bubbles also appear to be informing the Commission’s thinking, as it says it wants platforms to make it easier for users to “discover and access different news sources representing alternative viewpoints” — via tools that let users customize and interact with the online experience to “facilitate content discovery and access to different news sources”.

Though another stated objective is for platforms to “improve access to trustworthy information” — so there are questions about how those two aims can be balanced, i.e. without efforts towards one undermining the other. 

On trustworthiness, the EC says it wants platforms to help users assess whether content is reliable using “indicators of the trustworthiness of content sources”, as well as by providing “easily accessible tools to report disinformation”.

In one of several steps Facebook has taken since 2016 to try to tackle the problem of fake content being spread on its platform, the company experimented with putting ‘disputed’ labels or red flags on potentially untrustworthy information. However, it discontinued this in December after research suggested negative labels could entrench deeply held beliefs, rather than helping to debunk fake stories.

Instead it started showing related stories — containing content it had verified as coming from news outlets its network of fact checkers considered reputable — as an alternative way to debunk potential fakes.

The Commission’s approach looks to be aligning with Facebook’s rethought approach — with the subjective question of how to make judgements on what is (and therefore what isn’t) a trustworthy source likely being handed off to third parties, given that another strand of the code is focused on “enabling fact-checkers, researchers and public authorities to continuously monitor online disinformation”.

Since 2016 Facebook has been leaning heavily on a network of local third party ‘partner’ fact-checkers to help identify and mitigate the spread of fakes in different markets — including checkers for written content and also photos and videos, the latter in an effort to combat fake memes before they have a chance to go viral and skew perceptions.

In parallel, Google has also been working with external fact-checkers, on initiatives such as highlighting fact-checked articles in Google News and search.

The Commission clearly approves of the companies reaching out to a wider network of third party experts. But it is also encouraging work on innovative tech-powered fixes to the complex problem of disinformation — describing AI (“subject to appropriate human oversight”) as set to play a “crucial” role for “verifying, identifying and tagging disinformation”, and pointing to blockchain as having promise for content validation.

Specifically it reckons blockchain technology could play a role by, for instance, being combined with the use of “trustworthy electronic identification, authentication and verified pseudonyms” to preserve the integrity of content and validate “information and/or its sources, enable transparency and traceability, and promote trust in news displayed on the Internet”.
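Stripped of the buzzword, the core mechanism being proposed is a tamper-evident log: hash each piece of content together with the previous entry's hash, and tie each entry to the publisher's verified identity, so any later alteration breaks the chain. A minimal sketch (the HMAC with a shared demo key is a stand-in for the real electronic-identification signature the Commission envisages):

```python
import hashlib
import hmac

PUBLISHER_KEY = b"demo-key"  # stand-in for a real verified identity

def append_entry(chain: list, content: str) -> None:
    # Each entry hashes the previous hash + the new content,
    # chaining them so history can't be silently rewritten.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = (prev_hash + content).encode()
    chain.append({
        "content": content,
        "hash": hashlib.sha256(payload).hexdigest(),
        # HMAC "signature" ties the entry to the publisher's key
        "sig": hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest(),
    })

def verify_chain(chain: list) -> bool:
    prev_hash = "0" * 64
    for entry in chain:
        payload = (prev_hash + entry["content"]).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        if not hmac.compare_digest(
                entry["sig"],
                hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_entry(chain, "Article v1: original report")
append_entry(chain, "Correction appended by the same outlet")
print(verify_chain(chain))          # chain validates

chain[0]["content"] = "Doctored text"  # tamper with history
print(verify_chain(chain))          # verification now fails
```

A production system would use public-key signatures rather than a shared key, and could anchor the chain head on a distributed ledger; the verification logic stays the same.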

It’s one of a handful of nascent technologies the executive flags as potentially useful for fighting fake news, and whose development it says it intends to support via an existing EU research funding vehicle: The Horizon 2020 Work Program.

It says it will use this program to support research activities on “tools and technologies such as artificial intelligence and blockchain that can contribute to a better online space, increasing cybersecurity and trust in online services”.

It also flags “cognitive algorithms that handle contextually-relevant information, including the accuracy and the quality of data sources” as a promising tech to “improve the relevance and reliability of search results”.

The Commission is giving platforms until July to develop and apply the Code of Practice — and is using the possibility that it could still draw up new laws if it feels the voluntary measures fail as a mechanism to encourage companies to put the sweat in.

It is also proposing a range of other measures to tackle the online disinformation issue — including:

  • An independent European network of fact-checkers: The Commission says this will establish “common working methods, exchange best practices, and work to achieve the broadest possible coverage of factual corrections across the EU”; and says they will be selected from the EU members of the International Fact Checking Network, which it notes follows “a strict Code of Principles”
  • A secure European online platform on disinformation to support the network of fact-checkers and relevant academic researchers with “cross-border data collection and analysis”, as well as benefitting from access to EU-wide data
  • Enhancing media literacy: On this it says a higher level of media literacy will “help Europeans to identify online disinformation and approach online content with a critical eye”. So it says it will encourage fact-checkers and civil society organisations to provide educational material to schools and educators, and organise a European Week of Media Literacy
  • Support for Member States in ensuring the resilience of elections against what it dubs “increasingly complex cyber threats” including online disinformation and cyber attacks. Stated measures here include encouraging national authorities to identify best practices for the identification, mitigation and management of risks in time for the 2019 European Parliament elections. It also notes work by a Cooperation Group, saying “Member States have started to map existing European initiatives on cybersecurity of network and information systems used for electoral processes, with the aim of developing voluntary guidance” by the end of the year. And it says it will organise a high-level conference with Member States on cyber-enabled threats to elections in late 2018
  • Promotion of voluntary online identification systems with the stated aim of improving the “traceability and identification of suppliers of information” and promoting “more trust and reliability in online interactions and in information and its sources”. This includes support for related research activities in technologies such as blockchain, as noted above. The Commission also says it will “explore the feasibility of setting up voluntary systems to allow greater accountability based on electronic identification and authentication scheme” — as a measure to tackle fake accounts. “Together with other actions aimed at improving traceability online (improving the functioning, availability and accuracy of information on IP and domain names in the WHOIS system and promoting the uptake of the IPv6 protocol), this would also contribute to limiting cyberattacks,” it adds
  • Support for quality and diversified information: The Commission is calling on Member States to scale up their support of quality journalism to ensure a pluralistic, diverse and sustainable media environment. The Commission says it will launch a call for proposals in 2018 for “the production and dissemination of quality news content on EU affairs through data-driven news media”

It says it will aim to co-ordinate its strategic comms policy to try to counter “false narratives about Europe” — which makes you wonder whether debunking the output of certain UK tabloid newspapers might fall under that new EC strategy — and also more broadly to tackle disinformation “within and outside the EU”.

Commenting on the proposals in a statement, the Commission’s VP for the Digital Single Market, Andrus Ansip, said: “Disinformation is not new as an instrument of political influence. New technologies, especially digital, have expanded its reach via the online environment to undermine our democracy and society. Since online trust is easy to break but difficult to rebuild, industry needs to work together with us on this issue. Online platforms have an important role to play in fighting disinformation campaigns organised by individuals and countries who aim to threaten our democracy.”

The EC’s next step will be to bring the relevant parties together — including platforms, the ad industry and “major advertisers” — in a forum to foster cooperation and get them to apply themselves to what are still, at this stage, voluntary measures.

“The forum’s first output should be an EU–wide Code of Practice on Disinformation to be published by July 2018, with a view to having a measurable impact by October 2018,” says the Commission. 

The first progress report will be published in December 2018. “The report will also examine the need for further action to ensure the continuous monitoring and evaluation of the outlined actions,” it warns.

And if self-regulation fails…

In a fact sheet further fleshing out its plans, the Commission states: “Should the self-regulatory approach fail, the Commission may propose further actions, including regulatory ones targeted at a few platforms.”

And for “a few” read: Mainstream social platforms — so likely the big tech players in the social digital arena: Facebook, Google, Twitter.

For an example of potential regulatory action, tech giants need only look to Germany, where a 2017 social media hate speech law introduced fines of up to €50M for platforms that fail to comply with valid takedown requests within 24 hours in straightforward cases. It’s exactly the kind of law that could come rushing down the pipe at them EU-wide if the Commission and Member States decide it’s necessary to legislate.

Justice and consumer affairs commissioner Vera Jourova signaled in January that her preference, on hate speech at least, was to continue pursuing the voluntary approach — though she also said some Member States’ ministers are open to a new EU-level law should that approach fail.

In Germany the so-called NetzDG law has faced criticism for pushing platforms towards risk-averse censorship of online content. The Commission is clearly keen to avoid similar charges being leveled at its proposals, stressing that if regulation were deemed necessary “such [regulatory] actions should in any case strictly respect freedom of expression”.

Commenting on the Code of Practice proposals, a Facebook spokesperson told us: “People want accurate information on Facebook – and that’s what we want too. We have invested heavily in fighting false news on Facebook by disrupting the economic incentives for the spread of false news, building new products and working with third-party fact checkers.”

A Twitter spokesman declined to comment on the Commission’s proposals but flagged contributions he said the company is already making to support media literacy — including an event last week at its EMEA HQ.

At the time of writing Google had not responded to a request for comment.

Last month the Commission further tightened the screw on platforms over terrorist content specifically — saying it wants such material taken down within an hour of a report as a general rule. It still hasn’t cemented that one-hour ‘rule’ into legislation, though, again preferring to see how much action it can squeeze out of platforms via the self-regulation route.

 

https://ift.tt/2vYxesQ

Walmart retreats from its UK Asda business to hone its focus on competing with Amazon

Walmart’s strategy to get itself fighting fit against Amazon saw one more development today.

This morning, UK supermarket chain Sainsbury’s announced a deal with Walmart to buy a majority stake in Asda, Walmart’s wholly-owned UK subsidiary. The deal values Asda at £7.3 billion, and (if it closes) will net Walmart £2.975 billion in cash, a 42 percent share of the combined business as a “long-term shareholder”, and 29.9 percent voting rights in the combined entity, which will include 2,800 Sainsbury’s, Asda and Argos stores and 330,000 employees in the country.

The news underscores how Walmart, off the back of a challenging quarter of e-commerce sales in the crucial holiday period (news that shook investors enough to send Walmart’s stock tumbling), is still trying to figure out the right mix of its business to fight off not just current retail competition, but also whatever form its competition might take in the future. At the moment, the one big common rival in both of those scenarios is Amazon.

In the US, Walmart has been trying out multiple routes for consumers to shop in new ways that address the kinds of options that the likes of Amazon now offers them. Targeting different geographies and demographics, Walmart has made big bets like its $3 billion acquisition of Jet.com; expanding its own new delivery services, and payment and return methods; as well as running pilots with various third parties like Postmates and DoorDash.

Internationally, it’s a different story. Walmart has a significantly reduced presence — its international business in aggregate is around one-third the size of its US business, $118 billion in FY2017 versus $318 billion. And with no clearly dominant position in any of its international markets, the company has been considering a variety of other options to figure out the best way forward.

“This proposed merger represents a unique and bold opportunity, consistent with our strategy of looking for new ways to drive international growth,” said Judith McKenna, president and CEO of Walmart International, in a statement. “Asda became part of Walmart nearly 20 years ago, and it is a great business and an important part of our portfolio, acting as a source of best practices, new ideas and talent for Walmart businesses around the world. We believe this combination will create a dynamic new retail player better positioned for even more success in a fast-changing and competitive UK market. It will unlock value for both customers and shareholders, but, at the same time, it’s the colleagues at Asda who make the difference, and this merger will provide them with broader opportunities within the retail group.  We are very much looking forward to working closely with Sainsbury’s to deliver the benefits of the combined business.”

The UK market is a prime example of the kind of scenario that hasn’t been working as well for Walmart as it could, and I think that the decision for Walmart to move back from its UK business has a strong link to the Amazon effect on the market.

In the UK, Asda is number three in supermarket share, with 15.6 percent, behind leader Tesco and Sainsbury’s. All three of the leaders focus on traditional supermarket formats and their modern-day UK twists: huge stores with multiple selections for each product, ranging from bargain tiers to more expensive, premium varieties; sizeable chains of smaller, convenience store-style locations; and online delivery of varying popularity.

The three tiers of operations may sound like diversification, but the category is actually very undiversified, making for extreme price competition on the products themselves (and that happens both before and after you buy: another, smaller competitor, the online grocery delivery service Ocado, regularly refunds me money, unprompted, on products it says are sold for less at competing stores).

On top of that, the big three have all been cannibalised in recent times — partly by the rise of smaller discount chains like Aldi and Lidl, which forgo brand names in favor of a smaller selection of mostly own-brand products at cheaper prices (a little like Trader Joe’s, which is owned by Aldi, but often much less expensive); and partly by a big shift to shopping online, an area where Amazon is hoping only to get bigger and is investing a lot. Beyond Amazon’s Whole Foods acquisition, in the UK specifically this has included rumors that it has eyed up the online-only shopping service Ocado, and a partnership with another UK supermarket chain, Morrisons.

The fact that Amazon is now also branching into physical locations on the back of its strong online sales and corresponding logistics record is a major threat to Walmart and others that have built physical businesses first, and I think that Walmart has assessed all of the above and decided to throw in the towel on trying to tackle it on its own.

Notably, while Walmart on its own has been unable to reach a number-one position in the UK market, combined with Sainsbury’s (and as a minority partner) it will. Asda and Sainsbury’s would have a market share of over 31 percent (Sainsbury’s today has 15.8 percent; Asda 15.6 percent), putting the combined entity ahead of current leader Tesco (27.6 percent). That also means the deal will face regulatory scrutiny, and might get scuppered, or come with sell-off caveats, in order to go ahead.
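The combined-share claim is simple arithmetic on the figures above:

```python
# UK supermarket share, percent, per the figures reported above
sainsburys, asda, tesco = 15.8, 15.6, 27.6

combined = sainsburys + asda
print(round(combined, 1))   # -> 31.4, ahead of Tesco's 27.6
print(combined > tesco)     # -> True
```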

The news about Asda in the UK comes amid a series of other chops and changes in Walmart’s business outside of its core US market.

In India, Walmart is inching closer to a deal to acquire a majority stake in online retailer Flipkart, the largest online retailer in the country that itself is feeling a lot of heat from Amazon.

Walmart’s $10 billion – $12 billion deal for Flipkart, which is now expected to close at the end of June, would give the company a 51 percent stake in Flipkart, valuing the Indian online giant at about $18 billion. Amazon has made India — a fast-growing economy with strong consumer trends embracing digital commerce — a large priority in its international strategy, with plans to invest some xx billion into its efforts in the country.

Looking ahead, Walmart is also rumored to be looking at stepping away from Brazil.

It’s a long-term plan for the company. Two years ago, Walmart placed its e-commerce efforts in China into a venture with JD.com as a partial retreat from that market.

After that, Walmart seemed to put its efforts there on hold — its local Chinese corporate site ceased updating after 2016, but didn’t disappear altogether. More recently, just last month in fact, in a signal of how it hopes to continue combining physical and digital retail — or online-to-offline, as it’s often called — Walmart opened a pared-down “high tech” supermarket. There, people can shop for a select number of food and other items, as well as browse these and many more to buy online on JD Daojia (the JD venture) while in-store, and have them delivered.

The latest store in China, and Walmart’s approach there, could be an interesting template for what we might expect in the UK if the sale gets the green light from regulators. Sainsbury’s also owns Argos, a retailer essentially built on the catalog and online sales model: there is no large retail floor; instead, people order items — either at a counter in the store itself, or online — and either have them delivered or pick them up at another counter in the shop. Could we see similar “high-tech” supermarkets open in the UK, with the Asda brand put to a similar use and a greatly reduced retail footprint?


https://ift.tt/2Ks7h82

BT just knocked £4 per month off Infinity fibre broadband deals for new customers

We'll tell it to you straight: BT can't match the best broadband prices on the market. You can get ADSL internet at the moment for as little as £16.80 per month, and nobody touches Vodafone's £21 per month fibre broadband deal.

But people still flock to BT broadband in their droves. For one, it's still the most prominent name of all the internet providers in the UK. Plus, some of its perks are fantastic. And now it has dropped the price of its so-called Infinity fibre broadband plan by £4 per month - that's a £72 saving over the course of the 18-month contract.

So if you've been eyeing the upgrade to fibre and want to go with the UK's most popular broadband company, now is a good time to strike. Read on for more details about the plan and the added extras you'll get at the moment as well.

BT Infinity fibre broadband deal:

What is a BT Reward Card?

The pre-paid Mastercard - BT calls it a Reward Card - is effectively a credit card that you can use anywhere that accepts Mastercard. In short, that's around a million shops, cafes and restaurants around the world, so you shouldn't find it difficult to find places to spend, spend, spend.

It's an old-fashioned chip and pin card, rather than contactless, so slightly less convenient but much more secure. But do make sure that you claim your Reward Card within three months of installation, otherwise you'll lose out on all that cash.

Best broadband deals

If you're still um-ing and ah-ing over whether to go for one of these BT broadband offers, or if you want to see what other TV or phone options there are, then check out our BT broadband deals page - our bespoke price comparison table will help you choose, with packages that include unlimited calls and cheap BT Sports subscriptions. And if you want still more internet alternatives, then head on over to our main broadband deals comparison page.

https://ift.tt/2KqMzFF

Xiaomi increases the prices of Redmi Note 5 Pro, Mi TV 4

Xiaomi has announced a hike in the prices of its popular products, the Redmi Note 5 Pro and the Mi TV 4 to keep up with the increased costs and the huge demand. Now, the Redmi Note 5 Pro is costlier by Rs 1,000 while the Mi TV 4 55-inch is dearer by Rs 5,000.

According to Xiaomi, the increase in prices is due to multiple factors ranging from huge demand to a new tax structure that imposes a 10 percent tax on PCBA imports. Xiaomi says that it can ramp up local PCBA production by 100 percent only by Q3 2018, and as such, it has to import a “significant number” of PCBAs.

Further, the depreciation of the INR by nearly 5 percent since the beginning of this year has also affected Xiaomi’s costs. These three factors together would have increased Xiaomi’s working costs and to make up for them, the company has announced a hike in prices of two of its popular products.

New prices applicable from May 1st

Xiaomi has announced that the new increased prices will be applicable from May 1st on Mi.com, Mi Home stores and Flipkart.

After the price hike, the Redmi Note 5 Pro 4GB variant will cost Rs 14,999, while the 6GB variant will continue to be available at Rs 16,999. The Mi TV 4 55-inch 4K HDR will now cost Rs 44,999, up from its original price of Rs 39,999.
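For quick reference, the reported hikes can be checked with a little arithmetic (a minimal sketch; the Redmi Note 5 Pro 4GB's pre-hike price of Rs 13,999 is inferred from the stated Rs 1,000 increase, not quoted in the announcement):

```python
# Old prices in INR; the Redmi figure is inferred from the stated hike.
old_prices = {"Redmi Note 5 Pro 4GB": 13_999, "Mi TV 4 55-inch": 39_999}
hikes = {"Redmi Note 5 Pro 4GB": 1_000, "Mi TV 4 55-inch": 5_000}

# New prices are simply old price plus the announced hike.
new_prices = {model: old_prices[model] + hikes[model] for model in old_prices}
print(new_prices)
```

This matches the Rs 14,999 and Rs 44,999 figures Xiaomi has announced.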

Prices could come down later

While Xiaomi has not commented on whether the prices of the Redmi Note 5 Pro and the Mi TV 4 could come down in the future, the company has revealed that the local production of PCBAs will increase by 100 percent by Q3 2018. This could bring down Xiaomi’s PCBA imports, and there are chances that the company could pass on the benefit of reduced costs to its customers.

https://ift.tt/2KmMfYC

Microsoft releases preview of Office 2019

Microsoft is now rolling out a free preview of Office 2019 for businesses, including new versions of Word, Excel, PowerPoint and OneNote.

Office 365 subscribers already have all the new features – continuous updates are one of the main benefits of the cloud service – but this is the first perpetual-license release since Office 2016.

New features in Office 2019 include the ability to manage icons, vector graphics and 3D images in PowerPoint; funnel charts and 2D maps in Excel; Office 365 groups for Outlook; and a chic black theme for Word.

Cloud first

A preview of Office 2019 for Mac will arrive in the coming months, but Microsoft has already announced that the PC version will be exclusively for Windows 10, pushing businesses still using Windows 7 to upgrade their licenses.

Microsoft is also cagey about the future perpetual version of Office, saying only that: "As standard practice, Microsoft will continue evaluating customer needs and industry trends to determine a plan for future versions of our products and services." By 2022, it might have decided that the future lies wholly in the cloud.

Sadly, the Office 2019 preview isn’t available for home users – only businesses with volume licenses for Office 2016 that are planning to upgrade when the suite launches later this year. Everyone else will have to wait a little longer.

https://ift.tt/2vY5WTq


Netflix’s next big focus: more fantasy and sci-fi movies and shows

Netflix is looking to increase the number of sci-fi shows and movies it has on the service, as its viewers can’t get enough of these genres.

This is according to new research by analytics firm Ampere Analysis, which took in viewing data from 66,000 subscribers in 16 different markets and noted that a whopping 29% of upcoming original content from Netflix will be either fantasy or sci-fi.

The hope is that Netflix can capitalize on an audience that has invested in shows such as Stranger Things and movies like Annihilation.

Fantasy focus

Sci-fi and fantasy weren’t always Netflix’s most popular genres. Just last year it was comedy that dominated, but that has since been overtaken thanks to the likes of Altered Carbon, The OA and the recent Lost in Space reboot.

It’s not just Netflix seeing success in sci-fi and fantasy, either. Game of Thrones and Westworld have been massive hits for HBO, and Amazon is betting big on The Lord of the Rings franchise to bring it a new legion of fantasy fans. Apple, another player in what is becoming a crowded market, announced recently that Amazing Stories will be part of its upcoming slate of original shows.

Whether Netflix is creating original sci-fi content or partnering for things like Star Trek: Discovery and The Cloverfield Paradox, it will continue doing so until its viewers’ tastes change, as Ampere Analysis notes: "Netflix uses sophisticated customer analytics to rapidly respond to changes in subscriber taste, so as demand for Sci-Fi and Fantasy grows, so does the amount of commissioned content."

Via Business Insider

https://ift.tt/2ra5VXD


Saturday 28 April 2018

Facebook’s dark ads problem is systemic

Facebook’s admission to the UK parliament this week that it had unearthed unquantified thousands of dark fake ads after investigating fakes bearing the face and name of well-known consumer advice personality, Martin Lewis, underscores the massive challenge for its platform on this front. Lewis is suing the company for defamation over its failure to stop bogus ads besmirching his reputation with their associated scams.

Lewis decided to file his campaigning lawsuit after reporting 50 fake ads himself, having been alerted to the scale of the problem by consumers contacting him to ask if the ads were genuine or not. But the revelation that there were in fact associated “thousands” of fake ads being run on Facebook as a clickdriver for fraud shows the company needs to change its entire system, he has now argued.

In a response statement after Facebook’s CTO Mike Schroepfer revealed the new data-point to the DCMS committee, Lewis wrote: “It is creepy to hear that there have been 1,000s of adverts. This makes a farce of Facebook’s suggestion earlier this week that to get it to take down fake ads I have to report them to it.”

“Facebook allows advertisers to use what is called ‘dark ads’. This means they are targeted only at set individuals and are not shown in a time line. That means I have no way of knowing about them. I never get to hear about them. So how on earth could I report them? It’s not my job to police Facebook. It is Facebook’s job — it is the one being paid to publish scams.”

As Schroepfer told it to the committee, Facebook had removed the additional “thousands” of ads “proactively” — but as Lewis points out that action is essentially irrelevant given the problem is systemic. “A one off cleansing, only of ads with my name in, isn’t good enough. It needs to change its whole system,” he wrote.

In a statement on the case, a Facebook spokesperson told us: “We have also offered to meet Martin Lewis in person to discuss the issues he’s experienced, explain the actions we have taken already and discuss how we could help stop more bad ads from being placed.”

The committee raised various ‘dark ads’-related issues with Schroepfer — asking how, as with the Lewis example, a person could complain about an advert they literally can’t see?

The Facebook CTO avoided a direct answer but essentially his reply boiled down to: People can’t do anything about this right now; they have to wait until June when Facebook will be rolling out the ad transparency measures it trailed earlier this month — then he claimed: “You will basically be able to see every running ad on the platform.”

But there’s a very big difference between technically being able to see every ad running on the platform — and literally being able to see every ad running on the platform. (And, well, pity the pair of eyeballs that were condemned to that Dantean fate… )

In its PR about the new tools Facebook says a new feature — called “view ads” — will let users see the ads a Facebook Page is running, even if that Page’s ads haven’t appeared in an individual’s News Feed. So that’s one minor concession. However, while ‘view ads’ will apply to every advertiser Page on Facebook, a Facebook user will still have to know about the Page, navigate to it and click to ‘view ads’.

What Facebook is not launching is a public, searchable archive of all ads on its platform. It’s only doing that for a sub-set of ads — specifically those labeled “Political Ad”.

Clearly the Martin Lewis fakes wouldn’t fit into that category. So Lewis won’t be able to run searches against his name or face in future to try to identify new dark fake Facebook ads that are trying to trick consumers into scams by misappropriating his brand. Instead, he’d have to employ a massive team of people to click “view ads” on every advertiser Page on Facebook — and do so continuously, so long as his brand lasts — to try to stay ahead of the scammers.

So unless Facebook radically expands the ad transparency tools it has announced thus far it’s really not offering any kind of fix for the dark fake ads problem at all. Not for Lewis. Nor indeed for any other personality or brand that’s being quietly misused in the hidden bulk of scams we can only guess are passing across its platform.

Kremlin-backed political disinformation scams are really just the tip of the iceberg here. But even in that narrow instance Facebook estimated there had been 80,000 pieces of fake content targeted at just one election.

What’s clear is that without regulatory intervention the burden of proactive policing of dark ads and fake content on Facebook will keep falling on users — who will now have to actively sift through Facebook Pages to see what ads they’re running and try to figure out if they look legit.

Yet Facebook has 2BN+ users globally. The sheer number of Pages and advertisers on its platform renders “view ads” an almost entirely meaningless addition, especially as cyberscammers and malicious actors are also going to be experts at setting up new accounts to further their scams — moving on to the next batch of burner accounts after they’ve netted each fresh catch of unsuspecting victims.

The committee asked Schroepfer whether Facebook retains money from advertisers it ejects from its platform for running ‘bad ads’ — i.e. after finding they were running an ad its terms prohibit. He said he wasn’t sure, and promised to follow up with an answer. Which rather suggests it doesn’t have an actual policy. Mostly it’s happy to collect your ad spend.

“I do think we are trying to catch all of these things pro-actively. I won’t want the onus to be put on people to go find these things,” he also said, which is essentially a twisted way of saying the exact opposite: That the onus remains on users — and Facebook is simply hoping to have a technical capacity that can accurately review content at scale at some undefined moment in the future.

“We think of people reporting things, we are trying to get to a mode over time — particularly with technical systems — that can catch this stuff up front,” he added. “We want to get to a mode where people reporting bad content of any kind is the sort of defense of last resort and that the vast majority of this stuff is caught up front by automated systems. So that’s the future that I am personally spending my time trying to get us to.”

Trying, want to, future… aka zero guarantees that the parallel universe he was describing will ever align with the reality of how Facebook’s business actually operates — right here, right now.

In truth this kind of contextual AI content review is a very hard problem, as Facebook CEO Mark Zuckerberg has himself admitted. And it’s by no means certain the company can develop robust systems to properly police this kind of stuff. Certainly not without hiring orders of magnitude more human reviewers than it’s currently committed to doing. It would need to employ literally millions more humans to manually check all the nuanced things AIs simply won’t be able to figure out.

Or else it would need to radically revise its processes — as Lewis has suggested — to make them a whole lot more conservative than they currently are, by, for example, requiring much more careful and thorough scrutiny of (and even pre-vetting) certain classes of high-risk adverts. So yes, by engineering in friction.

In the meanwhile, as Facebook continues its lucrative business as usual — raking in huge earnings thanks to its ad platform (in its Q1 earnings this week it reported a whopping $11.97BN in revenue) — Internet users are left performing unpaid moderation for a massively wealthy for-profit business while simultaneously being subject to the bogus and fraudulent content its platform is also distributing at scale.

There’s a very clear and very major asymmetry here — and one European lawmakers at least look increasingly wise to.

Facebook frequently falling back on pointing to its massive size as the justification for why it keeps failing on so many types of issues — be it consumer safety or indeed data protection compliance — may even have interesting competition-related implications, as some have suggested.

On the technical front, Schroepfer was asked specifically by the committee why Facebook doesn’t use the facial recognition technology it has already developed — which it applies across its user-base for features such as automatic photo tagging — to block ads that are using a person’s face without their consent.

“We are investigating ways to do that,” he replied. “It is challenging to do technically at scale. And it is one of the things I am hopeful for in the future that would catch more of these things automatically. Usually what we end up doing is a series of different features would figure out that these ads are bad. It’s not just the picture, it’s the wording. What can often catch classes — what we’ll do is catch classes of ads and say ‘we’re pretty sure this is a financial ad, and maybe financial ads we should take a little bit more scrutiny on up front because there is the risk for fraud’.

“This is why we took a hard look at the hype going around cryptocurrencies. And decided that — when we started looking at the ads being run there, the vast majority of those were not good ads. And so we just banned the entire category.”

That response is also interesting, given that many of the fake ads Lewis is complaining about (which incidentally often point to offsite crypto scams) — and indeed which he has been complaining about for months at this point — fall into a financial category.

If Facebook can easily identify classes of ads using its current AI content review systems why hasn’t it been able to proactively catch the thousands of dodgy fake ads bearing Lewis’ image?

Why did it require Lewis to make a full 50 reports — and have to complain to it for months — before Facebook did some ‘proactive’ investigating of its own?

And why isn’t it proposing to radically tighten the moderation of financial ads, period?

The risks to individual users here are stark and clear. (Lewis writes, for example, that “one lady had over £100,000 taken from her”.)

Again it comes back to the company simply not wanting to slow down its revenue engines, nor take the financial hit and business burden of employing enough humans to review all the free content it’s happy to monetize. It also doesn’t want to be regulated by governments — which is why it’s rushing out its own set of self-crafted ‘transparency’ tools, rather than waiting for rules to be imposed on it.

Committee chair Damian Collins concluded one round of dark ads questions for the Facebook CTO by remarking that his overarching concern about the company’s approach is that “a lot of the tools seem to work for the advertiser more than they do for the consumer”. And, really, it’s hard to argue with that assessment.

This is not just an advertising problem either. All sorts of other issues that Facebook had been blasted for not doing enough about can also be explained as a result of inadequate content review — from hate speech, to child protection issues, to people trafficking, to ethnic violence in Myanmar, which the UN has accused its platform of exacerbating (the committee questioned Schroepfer on that too, and he lamented that it is “awful”).

In the Lewis fake ads case, this type of ‘bad ad’ — as Facebook would call it — should really be the most trivial type of content review problem for the company to fix, because it’s an exceedingly narrow issue involving a single named individual. (Though that might also explain why Facebook hasn’t bothered; albeit having ‘total willingness to trash individual reputations’ as your business M.O. doesn’t make for a nice PR message to sell.)

And of course it goes without saying there are far more — and far more murky and obscure — uses of dark ads that remain to be fully dragged into the light where their impact on people, societies and civilized processes can be scrutinized and better understood. (The difficulty of defining what is a “political ad” is another lurking loophole in the credibility of Facebook’s self-serving plan to ‘clean up’ its ad platform.)

Schroepfer was asked by one committee member about the use of dark ads to try to suppress African American votes in the US elections, for example, but he just reframed the question to avoid answering it — saying instead that he agrees with the principle of “transparency across all advertising”, before repeating the PR line about tools coming in June. Shame those “transparency” tools look so well designed to ensure Facebook’s platform remains as shadily opaque as possible.

Whatever the role of US targeted Facebook dark ads in African American voter suppression, Schroepfer wasn’t at all comfortable talking about it — and Facebook isn’t publicly saying. Though the CTO confirmed to the committee that Facebook employs people to work with advertisers, including political advertisers, to “help them to use our ad systems to best effect”.

“So if a political campaign were using dark advertising your people helping support their use of Facebook would be advising them on how to use dark advertising,” astutely observed one committee member. “So if somebody wanted to reach specific audiences with a specific message but didn’t want another audience to [view] that message because it would be counterproductive, your people who are supporting these campaigns by these users spending money would be advising how to do that wouldn’t they?”

“Yeah,” confirmed Schroepfer, before immediately pointing to Facebook’s ad policy — claiming “hateful, divisive ads are not allowed on the platform”. But of course bad actors will simply ignore your policy unless it’s actively enforced.

“We don’t want divisive ads on the platform. This is not good for us in the long run,” he added, without shedding so much as a chink more light on any of the bad things Facebook-distributed dark ads might have already done.

At one point he even claimed not to know what the term ‘dark advertising’ meant — leading the committee member to read out the definition from Google, before noting drily: “I’m sure you know that.”

Pressed again on why Facebook can’t use facial recognition at scale to at least fix the Lewis fake ads — given it’s already using the tech elsewhere on its platform — Schroepfer played down the value of the tech for these types of security use-cases, saying: “The larger the search space you use, so if you’re looking across a large set of people the more likely you’ll have a false positive — that two people tend to look the same — and you won’t be able to make automated decisions that said this is for sure this person.

“This is why I say that it may be one of the tools but I think usually what ends up happening is it’s a portfolio of tools — so maybe it’s something about the image, maybe the fact that it’s got ‘Lewis’ in the name, maybe the fact that it’s a financial ad, wording that is consistent with a financial ads. We tend to use a basket of features in order to detect these things.”
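Schroepfer’s point about search-space size is, at bottom, basic probability: even a tiny per-comparison false-match rate compounds as you compare one face against millions of identities. A minimal sketch (the 1-in-a-million rate is purely illustrative, not a Facebook figure, and the independence assumption is a simplification):

```python
def p_false_positive(p: float, n: int) -> float:
    """Probability of at least one false match across n pairwise comparisons,
    assuming independent comparisons each with false-match rate p."""
    return 1 - (1 - p) ** n

# A rate that looks negligible one-to-one becomes near-certain at scale.
for n in (1_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} identities -> P(at least one false match) = "
          f"{p_false_positive(1e-6, n):.4f}")
```

At a thousand identities the false-match risk is negligible; at Facebook scale (hundreds of millions of faces) it approaches certainty, which is why automated one-shot decisions become unreliable and a "basket of features" is needed instead.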

That’s also an interesting response since it was a security use-case that Facebook selected as the first of just two sample ‘benefits’ it presents to users in Europe ahead of the choice it is required (under EU law) to offer people on whether to switch facial recognition technology on or keep it turned off — claiming it “allows us to help protect you from a stranger using your photo to impersonate you”…

Yet judging by its own CTO’s analysis, Facebook’s face recognition tech would actually be pretty useless for identifying “strangers” misusing your photographs — at least without being combined with a “basket” of other unmentioned (and doubtless equally privacy-hostile) technical measures.

So this is yet another example of a manipulative message being put out by a company that is also the controller of a platform that enables all sorts of unknown third parties to experiment with and distribute their own forms of manipulative messaging at vast scale, thanks to a system designed to facilitate — nay, embrace — dark advertising.

What face recognition technology is genuinely useful for is Facebook’s own business. Because it gives the company yet another personal signal to triangulate and better understand who people on its platform are really friends with — which in turn fleshes out the user-profiles behind the eyeballs that Facebook uses to fuel its ad targeting, money-minting engines.

For profiteering use-cases the company rarely sits on its hands when it comes to engineering “challenges”. Hence its erstwhile motto to ‘move fast and break things’ — which has now, of course, morphed uncomfortably into Zuckerberg’s 2018 mission to ‘fix the platform’; thanks, in no small part, to the existential threat posed by dark ads which, up until very recently, Facebook wasn’t saying anything about at all. Except to claim it was “crazy” to think they might have any influence.

And now, despite major scandals and political pressure, Facebook is still showing zero appetite to “fix” its platform — because the issues being thrown into sharp relief are actually there by design; this is how Facebook’s business functions.

“We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools. If we’re successful this year then we’ll end 2018 on a much better trajectory,” wrote Zuckerberg in January, underlining how much easier it is to break stuff than put things back together — or even just make a convincing show of fiddling with sticking plaster.



https://ift.tt/2vUlg3j