Facebook has begun partnering with national governments to ensure that citizens are reminded of their civic responsibilities on the social media network.
One of those responsibilities is electoral: reminding users of the platform to register to vote.
In April, Nigeria's Independent National Electoral Commission (INEC) began the Continuous Voter Registration (CVR) exercise while also distributing uncollected Permanent Voter Cards (PVCs).
To ensure that all Nigerians of eligible age register to vote in the 2019 general elections, the electoral commission said the CVR exercise will run into 2019.
But the commission has lamented that over 1.4 million PVCs are yet to be collected in Lagos state alone.
With this new Facebook initiative, electoral commissions no longer have to shoulder the entire burden of reminding citizens to turn out for voter registration exercises.
India is the latest country to adopt the approach: the Election Commission of India has just announced that it will use Facebook to remind Indians to register to vote.
According to an official release, a ‘voter registration reminder’ notification will be sent on Facebook on July 1 to Indians who are eligible to vote.
The reminder will be sent out in 13 Indian languages – English, Hindi, Gujarati, Tamil, Telugu, Malayalam, Kannada, Punjabi, Bengali, Urdu, Assamese, Marathi and Odia.
By clicking on the ‘register now’ button on Facebook, people will be directed to India’s national voters’ services portal, which will guide them through the registration process.
“I am pleased to announce that the Election Commission is launching a special drive to enrol left out electors, with a special focus on first time electors.
“This is a step towards fulfilment of the motto of EC ‘No Voter to be Left Behind’,” Chief Election Commissioner Nasim Zaidi said.
TheNewsGuru reports this is the first time Facebook has been used for enrolling new voters across India.
Facebook on Tuesday announced that two billion people now use the social network to connect, communicate and collaborate.
“As of this morning, the Facebook community is now officially 2 billion people!” Mark Zuckerberg wrote in a post marking the milestone.
The social media giant’s founder recently highlighted his new mission of not just connecting people but helping them find common ground.
“We’re making progress connecting the world, and now let’s bring the world closer together,” he wrote, adding “It’s an honour to be on this journey with you”.
Facebook’s announcement came as it works to redefine its purpose, led by Zuckerberg, who has travelled across the US this year to better understand what people want from the social network.
“We realise that we need to do more too,” the 33-year-old said in a recent interview with CNN Tech.
“It’s important to give people a voice, to get a diversity of opinions out there, but on top of that, you also need to do this work of building common ground so that way we can all move forward together,” he added.
The firm’s new mission statement says it seeks “to give people the power to build community”.
Zuckerberg’s message was echoed by Naomi Gleit, a Vice President at the Internet giant, who credited the millions of small communities emerging within Facebook for helping drive growth.
More than a billion people take part each month in Facebook “groups” – built around everything from sporting interests to humanitarian projects, she said in an online post on Tuesday.
As more and more communication takes place in digital form, the full range of public conversations is moving online — in groups and broadcasts, in text and video, even with emoji. These discussions reflect the diversity of human experience: some are enlightening and informative, others are humorous and entertaining, and others still are political or religious. Some can also be hateful and ugly. Most responsible communications platforms and systems are now working hard to restrict this kind of hateful content.
Richard Allan, Facebook VP Public Policy EMEA
Facebook is no exception. We are an open platform for all ideas, a place where we want to encourage self-expression, connection and sharing. At the same time, when people come to Facebook, we always want them to feel welcome and safe. That’s why we have rules against bullying, harassing and threatening someone.
But what happens when someone expresses a hateful idea online without naming a specific person? A post that calls all people of a certain race “violent animals” or describes people of a certain sexual orientation as “disgusting” can feel very personal and, depending on someone’s experiences, could even feel dangerous. In many countries around the world, those kinds of attacks are known as hate speech. We are opposed to hate speech in all its forms, and don’t allow it on our platform.
In this post we want to explain how we define hate speech and approach removing it — as well as some of the complexities that arise when it comes to setting limits on speech at a global scale, in dozens of languages, across many cultures. Our approach, like those of other platforms, has evolved over time and continues to change as we learn from our community, from experts in the field, and as technology provides us new tools to operate more quickly, more accurately and precisely at scale.
Defining Hate Speech
The first challenge in stopping hate speech is defining its boundaries.
People come to Facebook to share their experiences and opinions, and topics like gender, nationality, ethnicity and other personal characteristics are often a part of that discussion. People might disagree about the wisdom of a country’s foreign policy or the morality of certain religious teachings, and we want them to be able to debate those issues on Facebook. But when does something cross the line into hate speech?
Our current definition of hate speech is anything that directly attacks people based on what are known as their “protected characteristics” — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.
There is no universally accepted answer for when something crosses the line. Although a number of countries have laws against hate speech, their definitions of it vary significantly.
In Germany, for example, laws forbid incitement to hatred; you could find yourself the subject of a police raid if you post such content online. In the US, on the other hand, even the most vile kinds of speech are legally protected under the US Constitution.
People who live in the same country — or next door — often have different levels of tolerance for speech about protected characteristics. To some, crude humor about a religious leader can be considered both blasphemy and hate speech against all followers of that faith. To others, a battle of gender-based insults may be a mutually enjoyable way of sharing a laugh. Is it OK for a person to post negative things about people of a certain nationality as long as they share that same nationality? What if a young person who refers to an ethnic group using a racial slur is quoting from lyrics of a song?
There is very important academic work in this area that we follow closely. Timothy Garton Ash, for example, has created the Free Speech Debate to look at these issues on a cross-cultural basis. Susan Benesch established the Dangerous Speech Project, which investigates the connection between speech and violence. These projects show how much work is left to be done in defining the boundaries of speech online, which is why we’ll keep participating in this work to help inform our policies at Facebook.
Enforcement
We’re committed to removing hate speech any time we become aware of it. Over the last two months, on average, we deleted around 66,000 posts reported as hate speech per week — that’s around 288,000 posts a month globally. (This includes posts that may have been reported for hate speech but deleted for other reasons, although it doesn’t include posts reported for other reasons but deleted for hate speech.)
But it’s clear we’re not perfect when it comes to enforcing our policy. Often there are close calls — and too often we get it wrong.
Sometimes, it’s obvious that something is hate speech and should be removed – because it includes the direct incitement of violence against protected characteristics, or degrades or dehumanizes people. If we identify credible threats of imminent violence against anyone, including threats based on a protected characteristic, we also escalate that to local law enforcement.
But sometimes, there isn’t a clear consensus — because the words themselves are ambiguous, the intent behind them is unknown or the context around them is unclear. Language also continues to evolve, and a word that was not a slur yesterday may become one today.
Here are some of the things we take into consideration when deciding what to leave on the site and what to remove.
Context
What does the statement “burn flags not fags” mean? While this is clearly a provocative statement on its face, should it be considered hate speech? For example, is it an attack on gay people, or an attempt to “reclaim” the slur? Is it an incitement of political protest through flag burning? Or, if the speaker or audience is British, is it an effort to discourage people from smoking cigarettes (fag being a common British term for cigarette)? To know whether it’s a hate speech violation, more context is needed.
Often the most difficult edge cases involve language that seems designed to provoke strong feelings, making the discussion even more heated — and a dispassionate look at the context (like country of speaker or audience) more important. Regional and linguistic context is often critical, as is the need to take geopolitical events into account. In Myanmar, for example, the word “kalar” has benign historic roots, and is still used innocuously across many related Burmese words. The term can however also be used as an inflammatory slur, including as an attack by Buddhist nationalists against Muslims. We looked at the way the word’s use was evolving, and decided our policy should be to remove it as hate speech when used to attack a person or group, but not in the other harmless use cases. We’ve had trouble enforcing this policy correctly recently, mainly due to the challenges of understanding the context; after further examination, we’ve been able to get it right. But we expect this to be a long-term challenge.
In Russia and Ukraine, we faced a similar issue around the use of slang words the two groups have long used to describe each other. Ukrainians call Russians “moskal,” literally “Muscovites,” and Russians call Ukrainians “khokhol,” literally “topknot.” After conflict started in the region in 2014, people in both countries started to report the words used by the other side as hate speech. We did an internal review and concluded that they were right. We began taking both terms down, a decision that was initially unpopular on both sides because it seemed restrictive, but in the context of the conflict felt important to us.
Often a policy debate becomes a debate over hate speech, as two sides adopt inflammatory language. This is often the case with the immigration debate, whether it’s about the Rohingya in South East Asia, the refugee influx in Europe or immigration in the US. This presents a unique dilemma: on the one hand, we don’t want to stifle important policy conversations about how countries decide who can and can’t cross their borders. At the same time, we know that the discussion is often hurtful and insulting.
When the influx of migrants arriving in Germany increased in recent years, we received feedback that some posts on Facebook were directly threatening refugees or migrants. We investigated how this material appeared globally and decided to develop new guidelines to remove calls for violence against migrants or dehumanizing references to them — such as comparisons to animals, to filth or to trash. But we have left in place the ability for people to express their views on immigration itself. And we are deeply committed to making sure Facebook remains a place for legitimate debate.
Intent
People’s posts on Facebook exist in the larger context of their social relationships with friends. When a post is flagged for violating our policies on hate speech, we don’t have that context, so we can only judge it based on the specific text or images shared. But the context can indicate a person’s intent, which can come into play when something is reported as hate speech.
There are times someone might share something that would otherwise be considered hate speech but for non-hateful reasons, such as making a self-deprecating joke or quoting lyrics from a song. People often use satire and comedy to make a point about hate speech.
Or they speak out against hatred by condemning someone else’s use of offensive language, which requires repeating the original offense. This is something we allow, even though it might seem questionable since it means some people may encounter material disturbing to them. But it also gives our community the chance to speak out against hateful ideas. We revised our Community Standards to encourage people to make it clear when they’re sharing something to condemn it, but sometimes their intent isn’t clear, and anti-hatred posts get removed in error.
On other occasions, people may reclaim offensive terms that were used to attack them. When someone uses an offensive term in a self-referential way, it can feel very different from when the same term is used to attack them. For example, the use of the word “dyke” may be considered hate speech when directed as an attack on someone on the basis of the fact that they are gay. However, if someone posted a photo of themselves with #dyke, it would be allowed. Another example is the word “faggot.” This word could be considered hate speech when directed at a person, but, in Italy, among other places, “frocio” (“faggot”) is used by LGBT activists to denounce homophobia and reclaim the word. In these cases, removing the content would mean restricting someone’s ability to express themselves on Facebook.
Mistakes
If we fail to remove content that you report because you think it is hate speech, it feels like we’re not living up to the values in our Community Standards. When we remove something you posted and believe is a reasonable political view, it can feel like censorship. We know how strongly people feel when we make such mistakes, and we’re constantly working to improve our processes and explain things more fully.
Our mistakes have caused a great deal of concern in a number of communities, including among groups who feel we act — or fail to act — out of bias. We are deeply committed to addressing and confronting bias anywhere it may exist. At the same time, we work to fix our mistakes quickly when they happen.
Last year, Shaun King, a prominent African-American activist, posted hate mail he had received that included vulgar slurs. We took down Mr. King’s post in error — not recognizing at first that it was shared to condemn the attack. When we were alerted to the mistake, we restored the post and apologized. Still, we know that these kinds of mistakes are deeply upsetting for the people involved and cut against the grain of everything we are trying to achieve at Facebook.
Continuing To Improve
People often ask: can’t artificial intelligence solve this? Technology will continue to be an important part of how we try to improve. We are, for example, experimenting with ways to filter the most obviously toxic language in comments so they are hidden from posts. But while we’re continuing to invest in these promising advances, we’re a long way from being able to rely on machine learning and AI to handle the complexity involved in assessing hate speech.
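To show why purely automated filtering falls short, here is a deliberately naive sketch of a keyword-based comment filter. It is not Facebook's system: the term list, logic and function names are hypothetical, and a simple word match like this cannot capture the context, intent or evolving slang discussed above.

```kotlin
// Hypothetical, deliberately simple comment filter.
// A real system would combine machine-learning classifiers with human review;
// this sketch only flags exact matches against a placeholder term list,
// which is exactly why context and intent get lost.
object NaiveToxicityFilter {
    // Placeholder terms, not a real slur list.
    private val flaggedTerms = setOf("offensiveword1", "offensiveword2")

    /** Returns true if the comment should be hidden pending human review. */
    fun shouldHide(comment: String): Boolean =
        comment.lowercase()
            .split(Regex("[^\\p{L}]+"))
            .any { it in flaggedTerms }
}

fun main() {
    val comments = listOf(
        "Great photo!",
        "You are such an offensiveword1",            // caught
        "Reclaiming offensiveword1 as a self-label"  // also caught, though the intent differs
    )
    comments.forEach { c ->
        println("${if (NaiveToxicityFilter.shouldHide(c)) "HIDE" else "KEEP"}: $c")
    }
}
```

The third comment shows the core problem: the same words can be an attack or a reclamation, and a keyword match cannot tell the difference.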
That’s why we rely so heavily on our community to identify and report potential hate speech. With billions of posts on our platform — and with the need for context in order to assess the meaning and intent of reported posts — there’s not yet a perfect tool or system that can reliably find and distinguish posts that cross the line from expressive opinion into unacceptable hate speech. Our model builds on the eyes and ears of everyone on the platform — the people who vigilantly report millions of posts to us each week for all sorts of potential violations. We then have teams of reviewers, with broad language expertise, who work 24 hours a day across time zones to apply our hate speech policies.
We’re building up these teams that deal with reported content: over the next year, we’ll add 3,000 people to our community operations team around the world, on top of the 4,500 we have today. We’ll keep learning more about local context and changing language. And, because measurement and reporting are an important part of our response to hate speech, we’re working on better ways to capture and share meaningful data with the public.
Managing a global community in this manner has never been done before, and we know we have a lot more work to do. We are committed to improving — not just when it comes to individual posts, but in how we discuss and explain our choices and policies overall.
Richard Allan is Facebook Vice President for Europe, the Middle East and Africa Public Policy.
Video chats with your friends and family in Messenger just got a whole lot more fun.
Check out the awesome new features – like animated reactions, filters, masks and effects, and the ability to take screenshots – available for one-on-one and group video chats.
You can now share your emotions with a reaction, add a filter to feel like your best self, make someone laugh with a bear mask, and even take pictures of your time together.
Choose one of the five Messenger emoji icons to amplify your emotions and express love, laughter, surprise, sadness or anger.
These reactions will animate onto the screen and then disappear, so you can express yourself in the moment.
To keep the fun alive, most reactions have different versions depending on whether your face is on or off the screen.
Tap the love reaction (like when you’re in a group video chat with your three best friends and someone shares amazing news) when the camera is facing you and tap it again when the camera is facing outward to see the difference.
Add a filter
Look and feel like your best self or express your current mood with Messenger’s new video filters.
Choose from a variety of filters, ranging from subtle lighting tweaks to bold colour changes – like black and white, red, or yellow.
Red not your colour? “Don’t worry, we’ve made sure the live preview allows you to test the filter on yourself before letting others see it,” Facebook said.
Dress up your chats
Masks in Messenger have been available for a while, but they’re even more fun now with a bunch of new ones to choose from.
Some masks have hidden effects, like reacting to your facial movements. (Hint: try opening your mouth while using the rabbit mask…)
“We have also added animated effects, like falling hearts and twinkling stars, to give your video chats expressive flair,” Facebook said.
Check out what happens when you wave your arm in front of the camera while using one of those effects!
Unlike reactions, masks and effects stay on the screen for the duration of the video chat (or until you take them off or switch to another one).
Save and send pictures of your video chats
People like to take screenshots of their video chats and share them with friends – whether that’s one-on-one with your sibling or in a group with your best friends.
Messenger video chat now has a new feature to easily capture and share your memories.
Simply tap the camera icon to take a picture of your video chat to save it to your phone’s camera roll.
From there, decide if you want to post it to your Messenger Day or other social media accounts.
You can also send the picture to the person or group that you’re video chatting with.
So, throw on a love reaction, black and white filter, or flower crown mask and snap a pic with your friends and family.
Facebook Inc. is reportedly in talks with Hollywood studios about producing original scripted, TV-quality shows.
The Wall Street Journal reported yesterday that the social network hopes the talks will lead to the launch of original programming by late summer.
The social media giant has indicated that it is willing to commit to production budgets as high as $3m per episode, in meetings with Hollywood talent agencies, the Journal reported, citing people familiar with the matter.
According to reports, Facebook is hoping to target audiences from ages 13 to 34, with a focus on the 17 to 30 range.
The company has already lined up “Strangers”, a relationship drama, and a game show, “Last State Standing”, the report said.
Although Facebook has yet to release an official statement on the matter, the tech firm is expected to release episodes in a traditional manner rather than dropping an entire season in one go as Netflix and Amazon.com do, WSJ reported.
The company is also willing to share its viewership data with Hollywood, the report added.
Meanwhile, Apple recently hired the co-presidents of Sony Pictures Television, Jamie Erlicht and Zack Van Amburg, to lead its video-programming efforts.
Apple began its long-awaited move into original television series last week, with a reality show called “Planet of the Apps”, an unscripted show about developers trying to interest celebrity mentors with a 60-second pitch on an escalator.
The company’s future programming plans include an adaptation of comedian James Corden’s “Carpool Karaoke” segment from his CBS Corporation show that will begin airing in August.
Internet tech giants Facebook, Microsoft, Twitter and YouTube have teamed up to form a grand alliance, the Global Internet Forum to Counter Terrorism, with the aim of making their hosted consumer services hostile to terrorists and violent extremists.
The spread of terrorism and violent extremism has become a pressing global problem and a critical challenge for all.
“We take these issues very seriously, and each of our companies have developed policies and removal practices that enable us to take a hard line against terrorist or violent extremist content on our hosted consumer services.
“We believe that by working together, sharing the best technological and operational elements of our individual efforts, we can have a greater impact on the threat of terrorist content online,” a statement released by the forum of the tech giants read.
The new forum builds on initiatives including the EU Internet Forum and the Shared Industry Hash Database; discussions with the UK and other governments; and the conclusions of the recent G7 and European Council meetings.
The forum said the scope of its work will evolve over time as there would be need for it to be responsive to the ever-evolving terrorist and extremist tactics.
It said the initial scope would cover technological solutions: the firms will work together to refine and improve existing joint technical work, such as the Shared Industry Hash Database; exchange best practices and develop and implement new content detection and classification techniques using machine learning; and define standard transparency reporting methods for terrorist content removals.
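As a rough illustration of how a shared hash database can work, the sketch below checks an uploaded file's digest against a set of previously flagged hashes. The class and function names are hypothetical, and real systems typically rely on perceptual hashes that survive re-encoding and cropping rather than the exact cryptographic hash used here for brevity.

```kotlin
import java.security.MessageDigest

// Hypothetical sketch of matching uploads against a shared database of
// hashes of previously identified terrorist content. Production systems
// generally use perceptual hashing (robust to re-encoding); SHA-256 is
// used here only to keep the example short.
fun sha256Hex(bytes: ByteArray): String =
    MessageDigest.getInstance("SHA-256")
        .digest(bytes)
        .joinToString("") { "%02x".format(it) }

class SharedHashDatabase(private val knownHashes: Set<String>) {
    /** True if this upload matches content already flagged by a participating company. */
    fun matchesKnownContent(upload: ByteArray): Boolean =
        sha256Hex(upload) in knownHashes
}

fun main() {
    val flaggedVideo = "bytes of a previously flagged video".toByteArray()
    val database = SharedHashDatabase(setOf(sha256Hex(flaggedVideo)))

    // A re-upload of the identical file is caught immediately and can be
    // blocked or routed to human reviewers.
    println(database.matchesKnownContent(flaggedVideo)) // true
    println(database.matchesKnownContent("an unrelated holiday video".toByteArray())) // false
}
```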
The grand alliance also said knowledge-sharing will be central to its modus operandi: it will work with counter-terrorism experts, including governments, civil society groups, academics and other companies, to engage in shared learning about terrorism. Through a joint partnership with the UN Security Council Counter-Terrorism Executive Directorate (UN CTED) and the ICT4Peace Initiative, it will also establish a broad knowledge-sharing network to engage with smaller companies, develop best practices and support counter-speech efforts.
Social media giant Facebook has announced that it is planning to launch a new video creation app designed to help its creator community produce high-quality videos.
Facebook says the video creation app is aimed at verified accounts owned by journalists, celebrities and other online influencers, Engadget reported on Friday.
Apart from access to Facebook Live, the new video creation app will have a “creative kit” that includes tools such as special video intros and outros, custom stickers and custom frames.
It will also have a Community tab, where users can interact with their fans and followers on Facebook, Instagram and Messenger, the report said.
Reportedly, Facebook is also running a “small test” of a video tab in the navigation bar of its flagship mobile applications.
Pressing the tab, which resembles a play button, brings up “an endless stream” of Facebook videos, from pages users follow and videos liked or shared by friends.
According to Engadget, the video creation app was announced at VidCon, a place where video creators from all over the world flock to promote their show as well as to meet their fanbase and get new ideas.
Russia’s FSB security agency said on Monday that its investigation has revealed the Telegram messaging service was used by those behind the Saint Petersburg metro bombing.
“During the probe into the April 3 terrorist attack in the Saint Petersburg metro, the FSB received reliable information about the use of Telegram by the suicide bomber, his accomplices and their mastermind abroad to conceal their criminal plans,” the FSB said in a statement.
The FSB statement said that the terrorists used the Telegram app “at each stage of the preparation of this terrorist attack”.
The Saint Petersburg bombing took the lives of fifteen people, and the Imam Shamil Battalion, a group suspected of having links to Al-Qaeda, claimed responsibility for the attack.
Telegram is a free Russian-designed messaging app that lets people exchange messages, photos and videos in groups of up to 5,000.
It has attracted about 100 million users since its launch in 2013.
But the service has drawn the ire of critics who say it can let criminals and terrorists communicate without fear of being tracked by police, pointing in particular to its use by Islamic State jihadists.
The app is one of several targeted in a legal crackdown by Russian authorities on the internet and on social media sites in particular.
Since January 1, internet companies have been required to store all users’ personal data at data centres in Russia and provide it to the authorities on demand.
Draft legislation that has already secured initial backing in the Russian parliament would make it illegal for messaging services to have anonymous users, but Telegram’s Russian chief executive, Pavel Durov, says this would compromise the privacy of the app’s users.
He stressed that compromising the privacy of Telegram’s users would force them, including “high-ranking Russian officials,” to communicate via apps based in the United States like Facebook-owned WhatsApp.
Durov, 32, created Russia’s popular VKontakte social media site before founding Telegram in the United States.
Facebook is launching a program in the UK to train and fund local organizations to combat extremist material online, as internet companies attempt to clamp down on hate speech and violent content on their services.
Facebook, which outlined new efforts to remove extremist and terrorism content from its social media platform last week, will launch the Online Civil Courage Initiative in the UK on Friday, the company said in a statement.
The new initiative will train non-governmental organizations to help them monitor and respond to extremist content and create a dedicated support desk so they can communicate directly with Facebook, the company said.
“There is no place for hate or violence on Facebook,” said Sheryl Sandberg, Facebook’s chief operating officer. “We use technology like AI to find and remove terrorist propaganda, and we have teams of counterterrorism experts and reviewers around the world working to keep extremist content off our platform.”
The British government has stepped up its criticism of Silicon Valley internet companies, accusing them of not acting quickly enough to take down extremist propaganda online and of fostering “safe places” where extremism can breed, following a string of attacks in London and Manchester in recent months.
Facebook, Alphabet’s Google, and Twitter have responded by saying they have made heavy investments and employed thousands of people to take down hate speech and violent content over the past two years. Security analysts say the efforts have dramatically reduced the use of these platforms for jihadist recruitment efforts, although more work needs to be done.
Prime Minister Theresa May has sought to enlist British public opinion to force the U.S. internet players to work more closely with the government rather than proposing new legislation or policies to assert greater control over the web.
Earlier this week, May urged fellow European Union leaders at a meeting in Brussels to join her in putting pressure on tech companies to ‘rid terrorist material from the internet in all our languages’.
She called for the internet companies to shift from reactively removing content when they are notified of it, toward greater use of automatic detection and removal tools – and ultimately preventing it from appearing on their platforms in the first place.
Facebook on Wednesday announced it is piloting new tools to prevent misuse of profile pictures. The social giant said that the feature, Photo Guard, was rolled out after feedback it received from users of the platform.
The new feature gives more control to users by limiting who can download and share their profile pictures.
Aarati Soman, a Facebook product manager, announced the new Photo Guard tool, also known as the Profile Picture Guard.
She also revealed that Facebook is introducing designs that can be added to profile pictures, which the company’s research has shown to be helpful in deterring misuse.
Facebook says users will start seeing a step-by-step guide to add an optional profile picture guard, and that once applied, the profile photo can no longer be downloaded, shared, or sent in a message on Facebook.
Additionally, people who are not your friends on Facebook won’t be able to tag anyone, including themselves, in your profile picture.
Facebook also says that it will prevent others from taking a screenshot of your profile picture on Facebook where possible.
This feature is currently available only on Android devices.
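Facebook has not published how its screenshot prevention works, but Android does expose a standard mechanism for it: an app can mark a window as secure so the system refuses to capture its contents. The sketch below shows that platform flag in a hypothetical activity; the class name and layout are illustrative only, and this is one plausible reason the capability is described as Android-only and “where possible”.

```kotlin
import android.os.Bundle
import android.view.WindowManager
import androidx.appcompat.app.AppCompatActivity

// Hypothetical activity showing the standard Android mechanism for blocking
// screenshots: FLAG_SECURE tells the system not to capture or record this window.
class ProfilePictureViewerActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // With FLAG_SECURE set, screenshots and screen recordings of this
        // window are blocked on devices that honour the flag.
        window.setFlags(
            WindowManager.LayoutParams.FLAG_SECURE,
            WindowManager.LayoutParams.FLAG_SECURE
        )
        // setContentView(R.layout.profile_picture_viewer) // illustrative layout id
    }
}
```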
With the new Photo Guard, users who opt for the tool will see a blue border and shield around their profile pictures as a visual cue of protection.
How to use the new Facebook Photo Guard feature:
Method 1
Refresh your News Feed, and you may see a message prompting you to Help Protect Your Profile Picture
Tap Turn On Profile Picture Guard
You will then see a screen explaining the benefits of the Profile Picture Guard
Click Next
You will then see your current profile photo, complete with the shield symbol, with the option to Save
Method 2
Open your Facebook profile.
Tap your profile photo
You will then see the option to Turn on profile picture guard
If selected, you will get the option to Save, and then see your profile photo with the shield symbol
As for the ability to add a design to your profile photo, the option shows up as a prompt on your News Feed, just like Method 1 above.
You will see the message Add a Design to Your Profile Picture, followed by the option to Add Design.
Once you tap this, you can then select from a range of design overlays, and then click Next.
Facebook says that these designs will make it easier for users to report misuse of their profile picture.