  1. #61

    Facebook Crosses The Line With New Facebook Messenger App

    8/7/14



    First, this is VERY important to read and understand. I’m doing my best to look out for all the Facebook Users who aren’t as tech savvy as their kids or friends. I’m trying to help explain what’s happening because if I don’t…nobody else will!


    If you’re anything like your neighbor…you probably use Facebook on your phone WAY more than you use it on a computer. You’ve been sending messages from the Facebook app and it probably always asks you if you want to install the Facebook Messenger App.


    It’s always been OPTIONAL, but coming soon to your Facebook experience…it won’t be an option…it will be mandatory if you want to send messages from your phone.


    No big deal, one might think…but the part that the average Facebook User doesn’t realize is the permissions you must give to Facebook in order to use the Facebook Messenger App. Here is a short list of the most disturbing permissions it requires and a quick explanation of what each means for you and your privacy (there’s a quick way to check these permissions yourself, sketched just after the list).



    • Change the state of network connectivity – This means that Facebook can change or alter your connection to the Internet or cell service. You’re basically giving Facebook the ability to turn features on your phone on and off for its own reasons without telling you.
    • Call phone numbers and send SMS messages – This means that if Facebook wants to…it can send text messages to your contacts on your behalf. Do you see the trouble in this? Who is Facebook to be able to access and send messages on your phone? You’re basically giving a stranger your phone and telling them to do what they want when they want!
    • Record audio, and take pictures and videos, at any time – Read that line again….RECORD audio…TAKE pictures….AT ANY TIME!! That means that the folks at Facebook can see through your lens on your phone whenever they want..they can listen to what you’re saying via your microphone if they choose to!!
    • Read your phone’s call log, including info about incoming and outgoing calls – Who have you been calling? How long did you talk to them? Now Facebook will know all of this because you’ve downloaded the new Facebook messenger app.
    • Read your contact data, including who you call and email and how often – Another clear violation of your privacy. Now Facebook will be able to read e-mails you’ve sent and take information from them to use for their own gain. Whether it’s for “personalized advertisements” or if it’s for “research purposes” ….whatever the reason..they’re accessing your private encounters.
    • Read personal profile information stored on your device – This means that if you have addresses, personal info, pictures or anything else that’s near and dear to your personal life…they can read it.
    • Get a list of accounts known by the phone, or other apps you use – Facebook will now have a tally of all the apps you use, how often you use them and what information you keep or exchange on those apps.
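    If you want to check a claim like this for yourself, here is a minimal sketch (not a polished tool) that asks Android directly which permissions an installed app declares. It assumes you have Python 3 and the Android SDK’s adb tool on your computer, plus a phone connected with USB debugging enabled; the Messenger package name it uses, com.facebook.orca, is the commonly reported one and is my assumption, not something stated in the article.

        # List the permissions an installed Android app declares, using adb.
        # Sketch only: assumes adb is on PATH and a device is connected.
        import subprocess

        PACKAGE = "com.facebook.orca"  # assumed package name for Facebook Messenger

        def requested_permissions(package):
            """Parse the 'requested permissions:' section of `adb shell dumpsys package`."""
            output = subprocess.run(
                ["adb", "shell", "dumpsys", "package", package],
                capture_output=True, text=True, check=True,
            ).stdout
            perms, in_section = [], False
            for line in output.splitlines():
                stripped = line.strip()
                if stripped == "requested permissions:":
                    in_section = True
                    continue
                if in_section:
                    if not stripped or stripped.endswith(":"):
                        break  # reached the next dumpsys section
                    perms.append(stripped)
            return perms

        if __name__ == "__main__":
            for perm in requested_permissions(PACKAGE):
                print(perm)

    Run it and compare the printed permission names (things like RECORD_AUDIO, READ_CONTACTS or READ_CALL_LOG) against the list above.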


    Hopefully, you take this as seriously as I do…after reading more about it and studying the permissions, I have now deleted the app from my phone and don’t intend to use it ever again. I still have my Facebook app, but I just won’t use the messaging feature unless I’m at a computer. Even then, I might not use messaging anymore.


    With these kinds of privacy invasions, I think Facebook is pushing the limits of what people will let them get away with. I remember when the Internet first began its march toward social-networking dominance, when AOL would send us CDs for free trials every week. On AOL, we made screen names that somewhat hid our identities and protected us against the unseen dangers online. Now, it seems that we’ve forgotten about that desire to protect our identity, and we just lay down and let them invade our privacy.


    There may be no turning back at this point because many people won’t read this or investigate the permissions of Facebook’s new mandatory app but at least I can say I tried to help us put up a fight. Pass this along to your friends and at least try to let them know what they’re getting into.

    http://thebull.cbslocal.com/2014/08/...messenger-app/

  2. #62

    Your New Facebook "Friend" may be the FBI

    3/16/2010



    The Feds are on Facebook. And MySpace, LinkedIn and Twitter, too.

    U.S. law enforcement agents are following the rest of the Internet world into popular social-networking services, going undercover with false online profiles to communicate with suspects and gather private information, according to an internal Justice Department document that offers a tantalizing glimpse of issues related to privacy and crime-fighting.

    Think you know who's behind that "friend" request? Think again. Your new "friend" just might be the FBI.

    The document, obtained in a Freedom of Information Act lawsuit, makes clear that U.S. agents are already logging on surreptitiously to exchange messages with suspects, identify a target's friends or relatives and browse private information such as postings, personal photographs and video clips.

    Among other purposes: Investigators can check suspects' alibis by comparing stories told to police with tweets sent at the same time about their whereabouts. Online photos from a suspicious spending spree — people posing with jewelry, guns or fancy cars — can link suspects or their friends to robberies or burglaries.

    The Electronic Frontier Foundation, a San Francisco-based civil liberties group, obtained the Justice Department document when it sued the agency and five others in federal court. The 33-page document underscores the importance of social networking sites to U.S. authorities. The foundation said it would publish the document on its Web site on Tuesday.

    With agents going undercover, state and local police coordinate their online activities with the Secret Service, FBI and other federal agencies in a strategy known as "deconfliction" to keep out of each other's way.

    "You could really mess up someone's investigation because you're investigating the same person and maybe doing things that are counterproductive to what another agency is doing," said Detective Frank Dannahey of the Rocky Hill, Conn., Police Department, a veteran of dozens of undercover cases.

    A decade ago, agents kept watch over AOL and MSN chat rooms to nab sexual predators. But those text-only chat services are old-school compared with today's social media, which contain mountains of personal data, photographs, videos and audio clips — a potential treasure trove of evidence for cases of violent crime, financial fraud and much more.

    The Justice Department document, part of a presentation given in August by top cybercrime officials, describes the value of Facebook, Twitter, MySpace, LinkedIn and other services to government investigators. It does not describe in detail the boundaries for using them.

    "It doesn't really discuss any mechanisms for accountability or ensuring that government agents use those tools responsibly," said Marcia Hoffman, a senior attorney with the civil liberties foundation.

    The group sued in Washington to force the government to disclose its policies for using social networking sites in investigations, data collection and surveillance.

    Covert investigations on social-networking services are legal and governed by internal rules, according to Justice Department officials. But they would not say what those rules are.

    The Justice Department document raises a legal question about a social-media bullying case in which U.S. prosecutors charged a Missouri woman with computer fraud for creating a fake MySpace account — effectively the same activity that undercover agents are doing, although for different purposes.

    The woman, Lori Drew, helped create an account for a fictitious teen boy on MySpace and sent flirtatious messages to a 13-year-old neighborhood girl in his name. The girl hanged herself in October 2006, in a St. Louis suburb, after she received a message saying the world would be better without her.

    A jury in California, where MySpace has its servers, convicted Drew of three misdemeanor counts of accessing computers without authorization because she was accused of violating MySpace's rules against creating fake accounts. But last year a judge overturned the verdicts, citing the vagueness of the law.

    "If agents violate terms of service, is that 'otherwise illegal activity'?" the document asks. It doesn't provide an answer.

    Facebook's rules, for example, specify that users "will not provide any false personal information on Facebook, or create an account for anyone other than yourself without permission." Twitter's rules prohibit its users from sending deceptive or false information. MySpace requires that information for accounts be "truthful and accurate."

    A former U.S. cybersecurity prosecutor, Marc Zwillinger, said investigators should be able to go undercover in the online world the same way they do in the real world, even if such conduct is barred by a company's rules. But there have to be limits, he said.

    In the face-to-face world, agents can't impersonate a suspect's spouse, child, parent or best friend. But online, behind the guise of a social-networking account, they can.

    "This new situation presents a need for careful oversight so that law enforcement does not use social networking to intrude on some of our most personal relationships," said Zwillinger, whose firm does legal work for Yahoo and MySpace.

    Undercover operations aren't necessary if the suspect is reckless. Federal authorities nabbed a man wanted on bank fraud charges after he started posting Facebook updates about the fun he was having in Mexico.

    Maxi Sopo, a native of Cameroon living in the Seattle area, apparently slipped across the border into Mexico in a rented car last year after learning that federal agents were investigating the alleged scheme. The agents initially could find no trace of him on social media sites, and they were unable to pin down his exact location in Mexico. But they kept checking and eventually found Sopo on Facebook.

    While Sopo's online profile was private, his list of friends was not. Assistant U.S. Attorney Michael Scoville began going through the list and was able to learn where Sopo was living. Mexican authorities arrested Sopo in September. He is awaiting extradition to the U.S.

    The Justice document describes how Facebook, MySpace and Twitter have interacted with federal investigators: Facebook is "often cooperative with emergency requests," the government said. MySpace preserves information about its users indefinitely and even stores data from deleted accounts for one year. But Twitter's lawyers tell prosecutors they need a warrant or subpoena before the company turns over customer information, the document says.

    "Will not preserve data without legal process," the document says under the heading, "Getting Info From Twitter ... the bad news."

    Twitter did not respond to a request for comment for this story.

    The chief security officer for MySpace, Hemanshu Nigam, said MySpace doesn't want to be the company that stands in the way of an investigation.

    "That said, we also want to make sure that our users' privacy is protected and any data that's disclosed is done under proper legal process," Nigam said.

    MySpace requires a search warrant for private messages less than six months old, according to the company.

    Facebook spokesman Andrew Noyes said the company has put together a handbook to help law enforcement officials understand "the proper ways to request information from Facebook to aid investigations."

    The Justice document includes sections about its own lawyers. For government attorneys taking cases to trial, social networks are a "valuable source of info on defense witnesses," they said. "Knowledge is power. ... Research all witnesses on social networking sites."

    But the government warned prosecutors to advise their own witnesses not to discuss cases on social media sites and to "think carefully about what they post."

    It also cautioned federal law enforcement officials to think prudently before adding judges or defense counsel as "friends" on these services.

    "Social networking and the courtroom can be a dangerous combination," the government said.


    http://www.nbcnews.com/id/35890739/n.../#.U_oO0vldV8G

  3. #63

    The US government can brand you a terrorist based on a Facebook post.




    The US government’s web of surveillance is vast and interconnected. Now we know just how opaque, inefficient and discriminatory it can be.

    As we were reminded again just this week, you can be pulled into the National Security Agency’s database quietly and quickly, and the consequences can be long and enduring. Through ICREACH, a Google-style search engine created for the intelligence community, the NSA provides data on private communications to 23 government agencies. More than 1,000 analysts had access to that information.

    This kind of data sharing, however, isn’t limited to the latest from Edward Snowden’s NSA files. It was confirmed earlier this month that the FBI shares its master watchlist, the Terrorist Screening Database, with at least 22 foreign governments, countless federal agencies, state and local law enforcement, plus private contractors.

    The watchlist tracks “known” and “suspected” terrorists and includes both foreigners and Americans. It’s also based on loose standards and secret evidence, which ensnares innocent people. Indeed, the standards are so low that the US government’s guidelines specifically allow for a single, uncorroborated source of information – including a Facebook or Twitter post – to serve as the basis for placing you on its master watchlist.

    Of the 680,000 individuals on that FBI master list, roughly 40% have “no recognized terrorist group affiliation”, according to the Intercept. These individuals don’t even have a connection – as the government loosely defines it – to a designated terrorist group, but they are still branded as suspected terrorists.

    The absurdities don’t end there. Take Dearborn, Michigan, a city with a population under 100,000 that is known for its large Arab American community – and has more watchlisted residents than any other city in America except New York.

    These eye-popping numbers are largely the result of the US government’s use of a loose standard – so-called “reasonable suspicion” – in determining who, exactly, can be watchlisted.

    Reasonable suspicion is such a low standard because it requires neither “concrete evidence” nor “irrefutable evidence”. Instead, an official is permitted to consider “reasonable inferences” and “to draw from the facts in light of his/her experience”.

    Consider a real world context – actual criminal justice – where an officer needs reasonable suspicion to stop a person in the street and ask him or her a few questions. Courts have controversially held that avoiding eye contact with an officer, traveling alone, and traveling late at night, for example, all amount to reasonable suspicion.

    These vague criteria are now being used to label innocent people as terrorism suspects.

    Moreover, because the watchlist isn’t limited to known, actual terrorists, an official can watchlist a person if he has reasonable suspicion to believe that the person is a suspected terrorist. It’s a circular logic – individuals can be watchlisted if they are suspected of being suspected terrorists – that is ultimately backwards, and must be changed.

    The government’s self-mandated surveillance guidance also includes loopholes that permit watchlisting without even showing reasonable suspicion. For example, non-citizens can be watchlisted for being associated with a watchlisted person – even if their relationship with that person is entirely innocuous. Another catch-all exception allows non-citizens to be watchlisted, so long as a source or tipster describes the person as an “extremist”, a “militant”, or in similar terms, and the “context suggests a nexus to terrorism”. The FBI’s definition of “nexus”, in turn, is far more nebulous than they’re letting on.

    Because the watchlist designation process is secret, there’s no way of knowing just how many innocent people are added to the list due to these absurdities and loopholes. And yet, history shows that innocent people are inevitably added to the list and suffer life-altering consequences. Life on the master watchlist can trigger enhanced screening at borders and airports; being on the No Fly List, which is a subset of the larger terrorist watchlist, can prevent airline travel altogether. The watchlist can separate family members for months or years, isolate individuals from friends and associates, and ruin employment prospects.

    Being branded a terrorism suspect also has far-reaching privacy implications. The watchlist is widely accessible, and government officials routinely collect the biometric data of watchlisted individuals, including their fingerprints and DNA strands. Law enforcement has likewise been directed to gather any and all available evidence when encountering watchlisted individuals, including receipts, business cards, health information and bank statements.

    Watchlisting is an awesome power and, if used, it must be exercised prudently and transparently.

    The standards for inclusion should be appropriately narrow, the evidence relied upon credible and genuine, and the redress and review procedures consistent with basic constitutional requirements of fairness and due process. Instead, watchlisting is being used arbitrarily under a cloud of secrecy.

    A watchlist saturated with innocent people diverts attention from real, genuine threats. A watchlist that disproportionately targets Arab and Muslim Americans or other minorities stigmatizes innocent people and alienates them from law enforcement. A watchlist based on poor standards and secret processes raises major constitutional concerns, including the right to travel freely and not to be deprived of liberty without due process of law.

    Indeed, you can’t help but wonder: are you already on the watchlist?


    http://www.theguardian.com/commentis...nnocent-people

  4. #64

    Facebook suspending Native Americans over ‘fake’ names

    February 11, 2015


    Native Americans are complaining that Facebook’s “real name” policy results in many accounts being repeatedly suspended, as the company’s algorithm cannot believe names such as Lone Hill or Brown Eyes could be real. According to a report from Colorlines, users with Native American names are being locked out of their accounts, with the social networking site demanding they prove their identities to regain access.

    “We require people to provide the name they use in real life,” the social network says on its help page. “That way, you always know who you’re connecting with.”

    This policy is having a direct negative effect on Native Americans, whose rare names sometimes raise red flags. In a blog post, Dana Lone Hill said she fell victim to this policy, as her account – active since 2007 – was suspended.
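    Facebook has never published how its name check actually works, so the following Python snippet is purely hypothetical. It only illustrates how a frequency-based filter built around a list of "common" surnames would misfire on real but uncommon names such as Lone Hill or Creepingbear; it is not Facebook’s algorithm.

        # Hypothetical illustration only -- NOT Facebook's actual name check.
        # A tiny "common surnames" list stands in for whatever frequency data
        # a real system might use; anything outside it gets flagged.
        COMMON_SURNAMES = {"smith", "johnson", "williams", "brown", "jones"}

        def looks_fake(full_name):
            parts = full_name.lower().split()
            if len(parts) < 2:
                return True
            surname = " ".join(parts[1:])
            # Unknown surnames and multi-word surnames are both flagged --
            # exactly the failure mode the article describes.
            return surname not in COMMON_SURNAMES or len(parts) > 2

        for name in ["Dana Lone Hill", "Shane Creepingbear", "John Smith"]:
            print(name, "->", "flagged" if looks_fake(name) else "accepted")

    The first two perfectly real names get flagged while “John Smith” sails through, which is the pattern reported above.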




    Lone Hill, a Lakota Indian, says she received a message from the social media giant saying that “it looks like the name on your Facebook account may not be your authentic name,” stating that she must submit proper IDs to prove her existence.

    After sending in her photo ID, library card, and one piece of mail, she received a reply from the company urging her to be patient while Facebook investigated her real identity. After almost a week, Lone Hill’s account was finally restored.

    “I had a little bit of paranoia at first regarding issues I had been posting about until I realized I wasn’t the only Native American this happened to,” Lone Hill wrote. One of Lone Hill’s friends had to change his Cherokee-alphabet name to English, while some others “were forced to either smash the two word last names together or omit one of the two words in the last name.”



    In the case of Oglala Lakota Brown Eyes, Facebook even “changed his name to Lance Brown,” Lone Hill wrote, and it took the threat of a class-action lawsuit for the company to allow him to use his real name again.
    Some Native Americans have been granted administrative protection status to avoid recurring problems, such as Shane Creepingbear, who was “kicked off Facebook for having a fake name” on Columbus Day last year. Creepingbear, of the Kiowa Tribe of Oklahoma, says it’s a “problem when someone decides they are the arbiter of names...it can come off a tad racist.”

    To deal with the problem – which apparently started in 2009 – more than 10,000 Indians signed a petition calling on Facebook to “allow Native Americans to use their Native names on their profiles.”
    A Facebook spokesperson told Colorlines that significant improvements have been made by the company recently, “including enhancing the overall experience and expanding the options available for verifying an authentic name.”



    http://rt.com/usa/231187-facebook-su...ive-americans/


    Comments:

    If FB deems your name is not a real name, they want you to upload your photo ID with the real name or some other official documents, otherwise you will never access your account again. The policy is just another ploy to force people to put their real identities up so the real names and the real pictures can be properly stored in databases to sell to and share with third parties and intelligence agencies.



  5. #65

    Facebook reveals news feed experiment to control emotions

    Protests over secret study involving 689,000 users in which friends' postings were moved to influence moods

    It already knows whether you are single or dating, the first school you went to and whether you like or loathe Justin Bieber. But now Facebook, the world's biggest social networking site, is facing a storm of protest after it revealed it had discovered how to make users feel happier or sadder with a few computer key strokes.

    It has published details of a vast experiment in which it manipulated information posted on 689,000 users' home pages and found it could make people feel more positive or negative through a process of "emotional contagion".

    In a study with academics from Cornell and the University of California, Facebook filtered users' news feeds – the flow of comments, videos, pictures and web links posted by other people in their social network. One test reduced users' exposure to their friends' "positive emotional content", resulting in fewer positive posts of their own. Another test reduced exposure to "negative emotional content" and the opposite happened.
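    To make the mechanism concrete, here is a toy Python sketch of the kind of filtering the experiment is described as doing. The word lists and the 50% suppression rate are invented for illustration; the article does not say how the study actually classified posts.

        # Toy model of "emotional contagion" feed filtering -- illustration only.
        import random

        POSITIVE = {"happy", "love", "great", "wonderful", "excited"}
        NEGATIVE = {"sad", "angry", "terrible", "hate", "lonely"}

        def tone(post):
            words = set(post.lower().split())
            if words & POSITIVE and not words & NEGATIVE:
                return "positive"
            if words & NEGATIVE and not words & POSITIVE:
                return "negative"
            return "neutral"

        def filter_feed(posts, suppress, drop_rate=0.5):
            """Silently withhold a share of posts with the targeted emotional tone."""
            shown = []
            for post in posts:
                if tone(post) == suppress and random.random() < drop_rate:
                    continue  # the user never sees this friend's post
                shown.append(post)
            return shown

        feed = ["I love this wonderful day", "Feeling sad and lonely", "Meeting at 3pm"]
        print(filter_feed(feed, suppress="positive"))

    Suppressing one tone in this way was, according to the study, enough to shift the tone of what users then posted themselves.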

    The study concluded: "Emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks."

    Lawyers, internet activists and politicians said this weekend that the mass experiment in emotional manipulation was "scandalous", "spooky" and "disturbing".

    On Sunday evening, a senior British MP called for a parliamentary investigation into how Facebook and other social networks manipulated emotional and psychological responses of users by editing information supplied to them.

    Jim Sheridan, a member of the Commons media select committee, said the experiment was intrusive. "This is extraordinarily powerful stuff and if there is not already legislation on this, then there should be to protect people," he said. "They are manipulating material from people's personal lives and I am worried about the ability of Facebook and others to manipulate people's thoughts in politics or other areas. If people are being thought-controlled in this kind of way there needs to be protection and they at least need to know about it."

    A Facebook spokeswoman said the research, published this month in the journal Proceedings of the National Academy of Sciences in the US, was carried out "to improve our services and to make the content people see on Facebook as relevant and engaging as possible".

    She said: "A big part of this is understanding how people respond to different types of content, whether it's positive or negative in tone, news from friends, or information from pages they follow."

    But other commentators voiced fears that the process could be used for political purposes in the runup to elections or to encourage people to stay on the site by feeding them happy thoughts and so boosting advertising revenues.

    In a series of Twitter posts, Clay Johnson, the co-founder of Blue State Digital, the firm that built and managed Barack Obama's online campaign for the presidency in 2008, said: "The Facebook 'transmission of anger' experiment is terrifying."

    He asked: "Could the CIA incite revolution in Sudan by pressuring Facebook to promote discontent? Should that be legal? Could Mark Zuckerberg swing an election by promoting Upworthy [a website aggregating viral content] posts two weeks beforehand? Should that be legal?"

    It was claimed that Facebook may have breached ethical and legal guidelines by not informing its users they were being manipulated in the experiment, which was carried out in 2012.

    The study said altering the news feeds was "consistent with Facebook's data use policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research".

    But Susan Fiske, the Princeton academic who edited the study, said she was concerned. "People are supposed to be told they are going to be participants in research and then agree to it and have the option not to agree to it without penalty."

    James Grimmelmann, professor of law at the University of Maryland, said Facebook had failed to gain "informed consent" as defined by the US federal policy for the protection of human subjects, which demands explanation of the purposes of the research and the expected duration of the subject's participation, a description of any reasonably foreseeable risks and a statement that participation is voluntary. "This study is a scandal because it brought Facebook's troubling practices into a realm – academia – where we still have standards of treating people with dignity and serving the common good," he said on his blog.

    It is not new for internet firms to use algorithms to select content to show to users and Jacob Silverman, author of Terms of Service: Social Media, Surveillance, and the Price of Constant Connection, told Wire magazine on Sunday the internet was already "a vast collection of market research studies; we're the subjects".

    "What's disturbing about how Facebook went about this, though, is that they essentially manipulated the sentiments of hundreds of thousands of users without asking permission," he said. "Facebook cares most about two things: engagement and advertising. If Facebook, say, decides that filtering out negative posts helps keep people happy and clicking, there's little reason to think that they won't do just that. As long as the platform remains such an important gatekeeper – and their algorithms utterly opaque – we should be wary about the amount of power and trust we delegate to it."

    Robert Blackie, director of digital at Ogilvy One marketing agency, said the way internet companies filtered information they showed users was fundamental to their business models, which made them reluctant to be open about it.

    "To guarantee continued public acceptance they will have to discuss this more openly in the future," he said. "There will have to be either independent reviewers of what they do or government regulation. If they don't get the value exchange right then people will be reluctant to use their services, which is potentially a big business problem."


    http://www.theguardian.com/technology/2014/jun/29/facebook-users-emotions-news-feeds


    Facebook apologises for psychological experiments on users
    http://www.theguardian.com/technolog...ments-on-users

    Journal that published Facebook mood study expresses 'concern' at its ethics
    http://www.theguardian.com/technolog...concern-ethics

    Facebook denies emotion contagion study had government and military ties
    http://www.theguardian.com/technolog...-military-ties







  6. #66

    Facebook Banning People For Replying To Islamophobes

    Facebook is getting completely ridiculous with their banning

    22nd September 2016

    Yep, another 30-day ban on my political Facebook account for posting something that does not meet "community guidelines".

    So I will let you decide if Mark FascistBerg has finally flipped his lid on this one.

    Maybe Mark FascistBerg is secretly islamophobic.

    I guess these "Hate Facts" and the new idea of "Hate Truth" are really upsetting the facebook censor nazis and the wimminz! LOL!

    Enjoy gentlemen!

    (Oh, and of course get yourself a manbook account and use our social network functions to avoid bans like this.)

    Yes...this earned me a 30-day ban! LOL!


    http://www.manbook.biz/wp-content/up...iculous_01.jpg
    http://www.manbook.biz/2016/09/22/fa...their-banning/

  7. #67

    Dislike: Facebook names Netanyahu’s former advisor ‘head of policy’

    19 June 2016

    Binyamin Netanyahu’s longtime senior adviser Jordana Cutler has been named as Facebook’s head of policy and communication in Israel’s latest bid to tackle the BDS movement online.

    A longtime senior adviser to Israeli Prime Minister Binyamin Netanyahu has been appointed as Facebook’s head of policy and communication in the latest cooperation between the social networking site and the Israeli government to tackle the BDS movement.

    Jordana Cutler, also chief of staff at the Israeli embassy in Washington, has joined Facebook’s Israel office to oversee the planning and execution of measures taken to combat BDS campaigns.

    Cutler’s new post was applauded by the Minister of Public Security, Gilad Erdan, who on Thursday announced a series of legislative measures taken by his government against promoting the boycott of Israel.

    “If we want to convince the world that de-legitimation of Israel is something wrong and that there should be consequences, we must start here in Israel,” Erdan was quoted by Israeli media as saying during a conference in Herzliya.

    “There will now be a real price to pay for someone working […] to isolate [Israel] from the rest of the world. I set up a legal team, together with the ministry of justice, that will promote governmental legislation on the matter,” Erdan said.

    “There has been an advance in dialogue between the state of Israel and Facebook,” he said, “Facebook realises that it has a responsibility to monitor its platform and remove content. I hope it will be regulated for good.”

    “We will use legitimate democratic tools to fight this battle. We will make companies shift from being on the attack against Israel to the defence of protecting themselves,” he added.

    The BDS movement, which describes itself as a global movement of citizens, advocates for non-violent campaigns of boycotts, divestment and sanctions as a means to overcome the Israeli regime of occupation, settler-colonialism and apartheid.

    Upon its launch in 2005, the campaign was widely ignored, and even laughed at, by Israel and its supporters around the world.

    But the Jewish state has since been troubled by the growing popularity of the movement, whose latest campaign, Tov Ramadan, raises awareness of Israeli settlement products and encourages people breaking their fast during the month of Ramadan to boycott them.

    http://khamakarpress.com/2016/06/19/...ead-of-policy/

  8. #68

    Why has Facebook changed their top comment algorithm so it only benefits racists, Nazis and trolls?


    8.21.17



    If you get your news from Facebook, you will almost certainly have noticed that the social media platform has recently introduced a new way of listing the top comments on posts.

    Previously the person with the ‘most-liked’ comment grabbed the coveted ‘top comment’ spot on popular posts.


    For those unfamiliar with Facebook, the top comment is the first one seen by anybody viewing the post, and listing comments in order of the number of likes they received was a good rough guide for gauging overall public opinion and sensing the prevailing political mood on social media.

    Almost invariably, especially on left-leaning publications, this meant that the most inspiring or positive comments would become the most liked, and therefore the most seen comment by other users.


    However, for whatever reason, Facebook have now decided that they don’t like this way of doing things. And so they have introduced a new way – a way that, whether through accident or design, only seems to be benefiting people willing to air utterly disgusting and hugely objectionable views.



    The new Facebook top comment algorithm, rather than listing the most liked comment at the top, now lists the comment with the most replies at the top of every post instead.



    By ordering top comments by the number of replies they receive, Facebook is now actively encouraging people to post the most outrageous and deplorable comments that they think other users will feel obliged to rebuke in order to win the top comment slot.
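    The difference is easy to see in a toy example. The comment data below is invented, and real Facebook ranking is doubtless more complicated than a single sort key, but it shows why a reply-count ordering rewards bait.

        # Old ordering vs. the change described above -- invented data, illustration only.
        comments = [
            {"author": "A", "text": "Thoughtful, widely liked reply", "likes": 950, "replies": 12},
            {"author": "B", "text": "Deliberately offensive bait", "likes": 40, "replies": 310},
        ]

        old_top = max(comments, key=lambda c: c["likes"])    # most-liked wins the slot
        new_top = max(comments, key=lambda c: c["replies"])  # most-replied wins the slot

        print("old top comment:", old_top["author"])  # A
        print("new top comment:", new_top["author"])  # B -- every rebuttal counts in its favour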


    In the same way that attention seeking fascists like Katie Hopkins use hatred, division and incitement to gain the attention they crave, Facebook’s new algorithm is now encouraging social media users to do the same.


    The previous method was not without its faults, but Facebook’s change only serves to ensure the most odious and egregious views become the most widely seen – completely eliminating any kind of sensible debate and presenting an even bigger platform to troll accounts – accounts which Facebook publicly say they want to eliminate from their site.

    Take this post from The Independent, for example:

    http://www.independent.co.uk/news/world/americas/charlottesville-robert-e-lee-statue-heather-heyer-murdered-anti-hate-protester-neo-nazi-car-white-a7904876.html?cmpid=facebook-post

    Despite The Independent being a relatively liberal, left-leaning publication with a huge fan base of like-minded readers, the top two comments on this post are:




    As you can see, two intentionally provocative comments posted by the same troll account have been pushed to the top of the comments because other users feel they have to respond.



    However, with the new Facebook algorithm, merely commenting to register your anger at someone’s comment is not always the best course of action to deal with trolls.

    What can we do to counter the trolls?


    Taking into account the new Facebook algorithm, there are several ways of countering troll comments such as this:




    • Like The Top Counter Comment


    A counter comment that receives more likes than the original comment will ensure that the counter comment becomes immediately visible to other users.


    The two top comments in the above Independent post had no counter comments that exceeded the like-count of the original comment, meaning the counter comments were not displayed by default.
    Now look at this post on Jeremy Corbyn’s page:





    Because David John’s counter comment received more likes than Daniel Whittle’s, it is automatically displayed when someone views the post. Therefore, to counter negative troll comments, the first thing you should do is to like the first counter comment you agree with.




    • Reply to the Comments You Agree With


    To ensure that positive comments are pushed higher up the top comments list, always try to reply to the ones you agree with. You don’t have to post an essay – just reply with ‘Agree’ or ‘Well said’, much like ‘bumping’ a post in a group to ensure it improves its visibility.




    • Report Troll Facebook Accounts


    Troll accounts are always very similar. They usually have no actual photos of a real human being and regularly post deplorable and obnoxious comments on things they disagree with purely for attention.


    If you suspect someone of commenting through a troll account, simply report them. Facebook has a very strict policy to ensure only people willing to show their own face and use their real name are allowed to use accounts.


    To report troll accounts, go to their profile, then:



    On Desktop: Click the three dots button, then click report:



    Then click ‘Report this profile’



    Then click ‘This is a fake account’



    Then click ‘Submit to Facebook for review’




    On Mobile, go to their profile and click the ‘More…’ button and then follow the above stages.




    If you know of any more ways to counter Facebook’s Nazi-enabling algorithm change, please let us know in the comments below.



    http://evolvepolitics.com/why-has-facebook-changed-their-top-comment-algorithm-so-it-only-benefits-racists-nazis-and-trolls/

  9. #69

    Facial Recognition in Facebook's Video Chat Device Raises Fears

    Facebook is developing a video chat device that can recognise users’ faces, according to a new report.
    The box is said to be similar to the Amazon Echo Show, and will feature a camera, touchscreen and speakers. However, a person familiar with the project says consumers have told Facebook that they fear the device could be used to spy on them, Business Insider reports.

    The device has been codenamed Project Aloha, and is set to be released by Facebook in May 2018. However, it may hit the market under a new brand name. According to Business Insider, the social networking company is afraid that widespread consumer mistrust of Facebook will cripple the device. It conducted marketing studies for Project Aloha and reportedly received “overwhelming concern” that Project Aloha would help the company spy on users.

    It was recently reported that Facebook has been using data gathered from another company for detailed insights on people’s app and website usage habits, such as which apps they use, how frequently they use them and even how long they use them for.

    People didn’t even need to have Facebook on their phones, the report said. As well as creating a new brand name for Aloha, Facebook is also thinking up “creative ways” to market it. For instance, as a device to help old people communicate with their families and friends.

    It’s being developed by Facebook’s Building 8 division, which is also working on mind-reading technology that Facebook describes as a “brain-computer speech-to-text interface”.

    https://www.independent.co.uk/life-s...GllGsoAg%3D%3D


 
