Tag: Hate Speech

  • Hate Speech: Wrong narrative for national discourse, integration by Ayobami Akanji

    By Ayobami Akanji

The recent threat to national unity posed by hate speech emanating from various parts of the country has created anxiety and doubt about whether Nigeria will remain a sovereign entity.

An undiscerning mind can easily conclude that these threats pose real and potential danger, especially given that those expected to speak against them in order to douse the tension they generate are either keeping quiet or reacting too late.

Such a situation raises the suspicion that the elite, who are expected to condemn the hate speech immediately, either support the utterances or are sponsoring them because of incompatible group interests. Another obvious temptation is to conclude that most of the Nigerian elite pursue sectional rather than national interests.

As Nigerians and foreigners alike pondered the fate of Nigeria while hate speech was taking its toll on the country, a glimmer of hope that things were not so bad after all appeared when leaders and stakeholders from the South East met with Acting President Yemi Osinbajo.

At the meeting, the South East leaders insisted that the unity of Nigeria is not negotiable, signifying that all hope is not lost and that the situation is still redeemable. Apart from the meeting with the South East leaders, Osinbajo’s engagement with leaders of Northern extraction and their counterparts from other geo-political zones produced similar results. The engagements and the unifying words showed the power of persuasion and responsiveness of the Federal Government, and the role it could play in dousing tension in the country.

    The outcome of all the engagements showed that the Nigerian State has come to stay. However, the only observed challenge is how the different ethnic, religious and cultural entities that make up Nigeria will be accommodated in such a way that no group will feel alienated or marginalized in resource allocation, welfare, security of lives and property.

Over the years, successive administrations in Nigeria have made efforts to foster national unity. A look at the various universities and unity schools in the country shows that students from all parts of the country study in the same academic environment. The National Youth Service Corps (NYSC) also stands out as a scheme that has ensured national integration for several decades; the NYSC came into existence in pursuance of national consciousness and patriotism.

Based on recent developments, the negative sides of the nation’s history should not be the focus of national discourse; rather, its collective past and present achievements should be upheld and propagated in the spirit of oneness and collective development.

A poignant question is: why must Nigeria remain one? The nation possesses immense human and natural resources that attract attention from the international community, especially the super powers. Nigerians must also be mindful of the clandestine agenda of nations that are envious of the dividends of our diversity.

Ghana, Togo, Cote d’Ivoire and Senegal overcame secession threats at different points in their history, and this should be a lesson for Nigeria, which is often regarded as the mouthpiece of the African continent. For the nation to continue enjoying that status, political stability, peace, security and development are key.

It is important to underscore the fact that no African nation split through referendum or civil war has achieved a high level of security and development. Most countries that experienced wars or civil strife still spend scarce resources to procure arms to fight insurgencies. Nigeria should avoid such a situation, considering that it survived a civil war that lasted three years.

    Inferences could be drawn from Libya and South-Sudan. Both countries depict the gloomy picture of divided nations torn apart by strife.

The implementation of a dynamic policy of unity of purpose, which the current leadership is leaning towards, will ensure that all Nigerians have a better understanding of the collective interest. The full implementation of the strategy will permanently halt agitations for secession and consolidate the much-desired unity of the nation.

Any move to cause war or civil strife in Nigeria should be avoided because of its negative impact on the growth and development of the nation. George Kennan, an American diplomat and strategist, captured the frightful momentum that purveyors of hate speech can unleash: “War has a momentum of its own. You know where you begin; you never know where you are going to end.”

     

    Akanji, a political strategist, wrote from Abuja

  • Instagram: How to use new AI mechanism to block offensive comments

Facebook-owned photo-sharing service Instagram has released an artificial-intelligence mechanism that will filter out offensive comments and spam.

    “Many of you have told us that toxic comments discourage you from enjoying Instagram and expressing yourself freely,” Co-founder and Chief Executive Kevin Systrom said in a blog post yesterday.

    “To help, we’ve developed a filter that will block certain offensive comments on posts and in live video,” he added.

The release comes as Internet companies, including Facebook, work to curb trolls, hate speech and the spread of violent ideology on their platforms.

    Instagram said that the filter scanning for spam is designed to work in Arabic, Chinese, English, French, German, Japanese, Portuguese, Russian and Spanish; and according to Systrom, the offensive-comment filter will launch first in English.

    “Our team has been training our systems for some time to recognize certain types of offensive and spammy comments so you never have to see them,” Systrom said, adding: “The tools will improve over time”.

    He further stated that “All other comments will appear as they normally do and you can still report comments, delete comments or turn them off”.

To access the comment filter, tap the “…” settings menu on your profile and scroll to tap “Comments”. [See illustration below].

[Illustration: Comment settings, offensive comments filter]
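Instagram has not published the details of its filter (press reports at the time pointed to a machine-learning system built on Facebook’s DeepText), so the following is only a toy sketch of the underlying idea: an opt-in filter that hides comments matching a blocklist. The word list and function names are illustrative, not Instagram’s.

```python
# Toy illustration only: Instagram's real filter uses machine learning;
# this naive keyword matcher merely sketches the idea of hiding
# comments that match a blocklist of offensive or spammy terms.
import re

BLOCKLIST = {"idiot", "trash", "spamlink"}  # hypothetical terms


def is_offensive(comment: str) -> bool:
    """Return True if any blocklisted word appears in the comment."""
    words = re.findall(r"[a-z']+", comment.lower())
    return any(word in BLOCKLIST for word in words)


def visible_comments(comments):
    """Drop flagged comments, the way the opt-in setting hides matches."""
    return [c for c in comments if not is_offensive(c)]


print(visible_comments(["great shot!", "you idiot", "nice colours"]))
# ['great shot!', 'nice colours']
```

A real deployment would replace the blocklist with a trained classifier, since (as the Facebook post later in this digest discusses) the same word can be benign or abusive depending on context.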

    Facebook, Microsoft, Twitter and YouTube this week announced the launch of an anti-terror partnership aimed at thwarting the spread of extremist content online.

The Global Internet Forum to Counter Terrorism intends to share engineering, research and knowledge to help “continue to make our hosted consumer services hostile to terrorists and violent extremists,” the companies said.

    Each of the technology giants has been working individually to prevent its platforms or services from being used to promote or spread extremist views.

     

  • Hard Questions: Who should decide what is hate speech in an online global community?

    By Richard Allan

    As more and more communication takes place in digital form, the full range of public conversations are moving online — in groups and broadcasts, in text and video, even with emoji. These discussions reflect the diversity of human experience: some are enlightening and informative, others are humorous and entertaining, and others still are political or religious. Some can also be hateful and ugly. Most responsible communications platforms and systems are now working hard to restrict this kind of hateful content.

Richard Allan, Facebook VP Public Policy EMEA

    Facebook is no exception. We are an open platform for all ideas, a place where we want to encourage self-expression, connection and sharing. At the same time, when people come to Facebook, we always want them to feel welcome and safe. That’s why we have rules against bullying, harassing and threatening someone.

    But what happens when someone expresses a hateful idea online without naming a specific person? A post that calls all people of a certain race “violent animals” or describes people of a certain sexual orientation as “disgusting” can feel very personal and, depending on someone’s experiences, could even feel dangerous. In many countries around the world, those kinds of attacks are known as hate speech. We are opposed to hate speech in all its forms, and don’t allow it on our platform.

    In this post we want to explain how we define hate speech and approach removing it — as well as some of the complexities that arise when it comes to setting limits on speech at a global scale, in dozens of languages, across many cultures. Our approach, like those of other platforms, has evolved over time and continues to change as we learn from our community, from experts in the field, and as technology provides us new tools to operate more quickly, more accurately and precisely at scale.

    Defining Hate Speech

    The first challenge in stopping hate speech is defining its boundaries.

    People come to Facebook to share their experiences and opinions, and topics like gender, nationality, ethnicity and other personal characteristics are often a part of that discussion. People might disagree about the wisdom of a country’s foreign policy or the morality of certain religious teachings, and we want them to be able to debate those issues on Facebook. But when does something cross the line into hate speech?

    Our current definition of hate speech is anything that directly attacks people based on what are known as their “protected characteristics” — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.

    There is no universally accepted answer for when something crosses the line. Although a number of countries have laws against hate speech, their definitions of it vary significantly.

    In Germany, for example, laws forbid incitement to hatred; you could find yourself the subject of a police raid if you post such content online. In the US, on the other hand, even the most vile kinds of speech are legally protected under the US Constitution.

    People who live in the same country — or next door — often have different levels of tolerance for speech about protected characteristics. To some, crude humor about a religious leader can be considered both blasphemy and hate speech against all followers of that faith. To others, a battle of gender-based insults may be a mutually enjoyable way of sharing a laugh. Is it OK for a person to post negative things about people of a certain nationality as long as they share that same nationality? What if a young person who refers to an ethnic group using a racial slur is quoting from lyrics of a song?

    There is very important academic work in this area that we follow closely. Timothy Garton Ash, for example, has created the Free Speech Debate to look at these issues on a cross-cultural basis. Susan Benesch established the Dangerous Speech Project, which investigates the connection between speech and violence. These projects show how much work is left to be done in defining the boundaries of speech online, which is why we’ll keep participating in this work to help inform our policies at Facebook.

    Enforcement

    We’re committed to removing hate speech any time we become aware of it. Over the last two months, on average, we deleted around 66,000 posts reported as hate speech per week — that’s around 288,000 posts a month globally. (This includes posts that may have been reported for hate speech but deleted for other reasons, although it doesn’t include posts reported for other reasons but deleted for hate speech.*)

    But it’s clear we’re not perfect when it comes to enforcing our policy. Often there are close calls — and too often we get it wrong.

    Sometimes, it’s obvious that something is hate speech and should be removed – because it includes the direct incitement of violence against protected characteristics, or degrades or dehumanizes people. If we identify credible threats of imminent violence against anyone, including threats based on a protected characteristic, we also escalate that to local law enforcement.

    But sometimes, there isn’t a clear consensus — because the words themselves are ambiguous, the intent behind them is unknown or the context around them is unclear. Language also continues to evolve, and a word that was not a slur yesterday may become one today.

    Here are some of the things we take into consideration when deciding what to leave on the site and what to remove.

    Context

    What does the statement “burn flags not fags” mean? While this is clearly a provocative statement on its face, should it be considered hate speech? For example, is it an attack on gay people, or an attempt to “reclaim” the slur? Is it an incitement of political protest through flag burning? Or, if the speaker or audience is British, is it an effort to discourage people from smoking cigarettes (fag being a common British term for cigarette)? To know whether it’s a hate speech violation, more context is needed.

    Often the most difficult edge cases involve language that seems designed to provoke strong feelings, making the discussion even more heated — and a dispassionate look at the context (like country of speaker or audience) more important. Regional and linguistic context is often critical, as is the need to take geopolitical events into account. In Myanmar, for example, the word “kalar” has benign historic roots, and is still used innocuously across many related Burmese words. The term can however also be used as an inflammatory slur, including as an attack by Buddhist nationalists against Muslims. We looked at the way the word’s use was evolving, and decided our policy should be to remove it as hate speech when used to attack a person or group, but not in the other harmless use cases. We’ve had trouble enforcing this policy correctly recently, mainly due to the challenges of understanding the context; after further examination, we’ve been able to get it right. But we expect this to be a long-term challenge.

    In Russia and Ukraine, we faced a similar issue around the use of slang words the two groups have long used to describe each other. Ukrainians call Russians “moskal,” literally “Muscovites,” and Russians call Ukrainians “khokhol,” literally “topknot.” After conflict started in the region in 2014, people in both countries started to report the words used by the other side as hate speech. We did an internal review and concluded that they were right. We began taking both terms down, a decision that was initially unpopular on both sides because it seemed restrictive, but in the context of the conflict felt important to us.

    Often a policy debate becomes a debate over hate speech, as two sides adopt inflammatory language. This is often the case with the immigration debate, whether it’s about the Rohingya in South East Asia, the refugee influx in Europe or immigration in the US. This presents a unique dilemma: on the one hand, we don’t want to stifle important policy conversations about how countries decide who can and can’t cross their borders. At the same time, we know that the discussion is often hurtful and insulting.

    When the influx of migrants arriving in Germany increased in recent years, we received feedback that some posts on Facebook were directly threatening refugees or migrants. We investigated how this material appeared globally and decided to develop new guidelines to remove calls for violence against migrants or dehumanizing references to them — such as comparisons to animals, to filth or to trash. But we have left in place the ability for people to express their views on immigration itself. And we are deeply committed to making sure Facebook remains a place for legitimate debate.

    Intent

    People’s posts on Facebook exist in the larger context of their social relationships with friends. When a post is flagged for violating our policies on hate speech, we don’t have that context, so we can only judge it based on the specific text or images shared. But the context can indicate a person’s intent, which can come into play when something is reported as hate speech.

    There are times someone might share something that would otherwise be considered hate speech but for non-hateful reasons, such as making a self-deprecating joke or quoting lyrics from a song. People often use satire and comedy to make a point about hate speech.

    Or they speak out against hatred by condemning someone else’s use of offensive language, which requires repeating the original offense. This is something we allow, even though it might seem questionable since it means some people may encounter material disturbing to them. But it also gives our community the chance to speak out against hateful ideas. We revised our Community Standards to encourage people to make it clear when they’re sharing something to condemn it, but sometimes their intent isn’t clear, and anti-hatred posts get removed in error.

    On other occasions, people may reclaim offensive terms that were used to attack them. When someone uses an offensive term in a self-referential way, it can feel very different from when the same term is used to attack them. For example, the use of the word “dyke” may be considered hate speech when directed as an attack on someone on the basis of the fact that they are gay. However, if someone posted a photo of themselves with #dyke, it would be allowed. Another example is the word “faggot.” This word could be considered hate speech when directed at a person, but, in Italy, among other places, “frocio” (“faggot”) is used by LGBT activists to denounce homophobia and reclaim the word. In these cases, removing the content would mean restricting someone’s ability to express themselves on Facebook.

    Mistakes

    If we fail to remove content that you report because you think it is hate speech, it feels like we’re not living up to the values in our Community Standards. When we remove something you posted and believe is a reasonable political view, it can feel like censorship. We know how strongly people feel when we make such mistakes, and we’re constantly working to improve our processes and explain things more fully.

    Our mistakes have caused a great deal of concern in a number of communities, including among groups who feel we act — or fail to act — out of bias. We are deeply committed to addressing and confronting bias anywhere it may exist. At the same time, we work to fix our mistakes quickly when they happen.

    Last year, Shaun King, a prominent African-American activist, posted hate mail he had received that included vulgar slurs. We took down Mr. King’s post in error — not recognizing at first that it was shared to condemn the attack. When we were alerted to the mistake, we restored the post and apologized. Still, we know that these kinds of mistakes are deeply upsetting for the people involved and cut against the grain of everything we are trying to achieve at Facebook.

    Continuing To Improve

    People often ask: can’t artificial intelligence solve this? Technology will continue to be an important part of how we try to improve. We are, for example, experimenting with ways to filter the most obviously toxic language in comments so they are hidden from posts. But while we’re continuing to invest in these promising advances, we’re a long way from being able to rely on machine learning and AI to handle the complexity involved in assessing hate speech.

That’s why we rely so heavily on our community to identify and report potential hate speech. With billions of posts on our platform — and with the need for context in order to assess the meaning and intent of reported posts — there’s not yet a perfect tool or system that can reliably find and distinguish posts that cross the line from expressive opinion into unacceptable hate speech. Our model builds on the eyes and ears of everyone on the platform — the people who vigilantly report millions of posts to us each week for all sorts of potential violations. We then have our teams of reviewers, who have broad language expertise and work 24 hours a day across time zones, to apply our hate speech policies.

    We’re building up these teams that deal with reported content: over the next year, we’ll add 3,000 people to our community operations team around the world, on top of the 4,500 we have today. We’ll keep learning more about local context and changing language. And, because measurement and reporting are an important part of our response to hate speech, we’re working on better ways to capture and share meaningful data with the public.

    Managing a global community in this manner has never been done before, and we know we have a lot more work to do. We are committed to improving — not just when it comes to individual posts, but how we approach discussing and explaining our choices and policies entirely.

     

     

    Richard Allan is Facebook Vice President for Europe, the Middle East and Africa Public Policy.

     

  • Facebook, Microsoft, Twitter, YouTube form grand alliance

    A host of Internet tech giants have teamed up to form a grand alliance known as Global Internet Forum to Counter Terrorism with the aim to help make their hosted consumer services hostile to terrorists and violent extremists.

    The spread of terrorism and violent extremism has become a pressing global problem and a critical challenge for all.

    “We take these issues very seriously, and each of our companies have developed policies and removal practices that enable us to take a hard line against terrorist or violent extremist content on our hosted consumer services.

    “We believe that by working together, sharing the best technological and operational elements of our individual efforts, we can have a greater impact on the threat of terrorist content online,” a statement released by the forum of the tech giants read.

    The new forum builds on initiatives including the EU Internet Forum and the Shared Industry Hash Database; discussions with the UK and other governments; and the conclusions of the recent G7 and European Council meetings.

The forum said the scope of its work will evolve over time, as it will need to respond to ever-evolving terrorist and extremist tactics.

It said the scope would initially include technological solutions: the tech firms working together to refine and improve existing joint technical work, such as the Shared Industry Hash Database; exchanging best practices; developing and implementing new content detection and classification techniques using machine learning; and defining standard transparency reporting methods for terrorist content removals.
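The Shared Industry Hash Database works by exchanging digital fingerprints (“hashes”) of known terrorist images and videos, so each participant can match new uploads against content another company has already removed. Production systems reportedly use perceptual hashes that survive resizing and re-encoding; the sketch below, which uses an ordinary SHA-256 hash and so only catches byte-identical copies, is a simplified illustration of the lookup idea, with all names hypothetical.

```python
# Simplified sketch of hash-database matching. Real systems use
# perceptual hashing so that edited copies still match; SHA-256
# here only flags byte-identical re-uploads.
import hashlib

shared_hash_db = set()  # fingerprints contributed by participating companies


def fingerprint(content: bytes) -> str:
    """Compute a stable fingerprint for a piece of content."""
    return hashlib.sha256(content).hexdigest()


def flag_known_content(upload: bytes) -> bool:
    """Return True if this upload matches previously shared content."""
    return fingerprint(upload) in shared_hash_db


# One company removes a violating file and shares its hash...
shared_hash_db.add(fingerprint(b"<removed extremist video bytes>"))

# ...another company can now block the identical re-upload on sight.
print(flag_known_content(b"<removed extremist video bytes>"))  # True
print(flag_known_content(b"different, harmless upload"))       # False
```

The design choice that makes this workable across competitors is that only hashes are shared, never the content itself, so no company has to redistribute material it has removed.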

The alliance also said it will adopt knowledge sharing in its modus operandi, working with counter-terrorism experts, including governments, civil society groups, academics and other companies, to engage in shared learning about terrorism. Through a joint partnership with the UN Security Council Counter-Terrorism Executive Directorate (UN CTED) and the ICT4Peace Initiative, it will establish a broad knowledge-sharing network to engage with smaller companies, develop best practices and support counter-speech.

     

     


     

  • Elites, masses are only two major tribes in Nigeria – Emir Sanusi

    *My grandfather was a Northerner, I am a Nigerian…

     

The Emir of Kano, Muhammadu Sanusi II, on Wednesday condemned groups that have resorted to hate speech along ethnic lines.

In a series of posts on Instagram, Sanusi said that, due to regional separation, Nigerians still speak in the language of 1953.

Condemning ethnicity and tribalism, the Emir said Nigerians need to stop thinking along ethnic lines and embrace the creation called Nigeria.

He began by saying: “I am Fulani. My grandfather was an Emir, also Fulani; my uncle and guardian was the late Emir of Kano, Alhaji Ado Bayero, and therefore I represent all that has been talked about this afternoon.”

He said the generation older than his used regions to refer to themselves, but that he is a Nigerian.

“My grandfather was a Northerner; I am a Nigerian,” he wrote.

    “The problem with this country is that in 2009, we speak in the language of 1953. Sir Olaniwun can be forgiven for the way he spoke, but I cannot forgive people of my generation speaking in that language.

“Let us go into this issue because there are so many myths that are being bandied around. Before colonialism, there was nothing like Northern Nigeria. Before the Sokoto Jihad, there was nothing like the Sokoto Caliphate.


    “The man from Kano regard himself as bakane. The man from Zaria was bazazzage. The man from Katsina was bakatsine. The kingdoms were at war with each other. They were Hausas, they were Muslims, they were killing each other.

    “The Yoruba were Ijebu, Owo, Ijesha, Akoko, Egba. When did they become one? When did the North become one? You have the Sokoto Caliphate that brought every person from Adamawa to Sokoto and said it is one kingdom. They now said it was a Muslim North.

    “The Colonialists came, put that together and said it is now called the Northern Nigeria. Do you know what happened? Our grandfathers were able to transform to being Northerners. We have not been able to transform to being Nigerians. The fault is ours.

    “Tell me, how many governors has South West produced after Awolowo that are role models of leadership? How many governors has the East produced like Nnamdi Azikiwe that can be role models of leadership? How Many governors in the Niger Delta are role models of leadership? Tell me.

    “There is no evidence statistically that any part of this country has produced good leaders.

    “You talk about Babangida and the problems of our economy. Who were the people in charge of the economy during Babangida era? Olu Falae, Kalu Idika Kalu.”

    He stated that it is hypocritical to talk about ethnicity only when it pleases us.

He said: “We talk ethnicity when it pleases us. It is hypocrisy. You said elections were rigged in 1959; Obasanjo and Maurice Iwu rigged elections in 2007.

    “Was it a Southern thing? It was not. The problem is: everywhere in this country, there is one Hausa, Ibo, Yoruba and Itshekiri man whose concern is how to get his hands on the pile and how much he can steal.

    “Whether it is in the military or in the civilian government, they sit down, they eat together. In fact, the constitution says there must be a minister from every state.

    “So, anybody that is still preaching that the problem of Nigeria is Yoruba or Hausa or Fulani, he does not love Nigeria.

    “The problem with Nigeria is that a group of people from each and every ethnic tribe is very selfish. The poverty that is found in Maiduguri is even worse than any poverty that you find in any part of the South.”

  • London terror attack: Internet must be regulated to stop terrorism – May

    Prime Minister Theresa May has called for closer regulation of the internet following a deadly terror attack in London.

    At least seven people were killed in a short but violent assault that unfolded late Saturday night in the heart of the capital, the third such attack to hit Britain this year.

    May said on Sunday that a new approach to tackling extremism is required, including changes that would deny terrorists and extremist sympathizers digital tools used to communicate and plan attacks.

    “We cannot allow this ideology the safe space it needs to breed,” May said. “Yet that is precisely what the internet and the big companies that provide internet-based services provide.”

    “We need to work with allied democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremist and terrorism planning,” she continued. “We need to do everything we can at home to reduce the risks of extremism online.”

    May’s call for new internet regulations was part of a larger strategy to combat terror, including what she described as “far too much tolerance of extremism in our country.”

    The attack comes as tech giants come under increased pressure in Europe over their policing of violent and hate speech.

    Europe’s top regulator released data last week that showed that Twitter has failed to take down a majority of hate speech posts after they had been flagged. Facebook and YouTube fared better, removing 66% of reported hate speech.

    In the U.K., a parliamentary committee report published last month alleged that social media firms have prioritized profit over user safety by continuing to host unlawful content. The report also called for “meaningful fines” if the companies do not quickly improve.

    “The biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content,” the report said. “Given their immense size, resources and global reach, it is completely irresponsible of them to fail to abide by the law.”

    Forty-eight people were injured in Saturday’s attack on London Bridge and Borough Market. Police officers pursued and shot dead three attackers within eight minutes of the first emergency call, London police said.

     

     

    CNN