Welcome to the Net Muslims Forums.
  1. #61
    Member
    Join Date
    Jan 2007


    Facebook Crosses The Line With New Facebook Messenger App


    First, this is VERY important to read and understand. I’m doing my best to look out for all the Facebook Users who aren’t as tech savvy as their kids or friends. I’m trying to help explain what’s happening because if I don’t…nobody else will!

    If you’re anything like your neighbor…you probably use Facebook on your phone WAY more than you use it on a computer. You’ve been sending messages from the Facebook app and it probably always asks you if you want to install the Facebook Messenger App.

    It's always been OPTIONAL, but coming soon to your Facebook experience…it won't be an option. It will be mandatory if you want to send messages from your phone.

    No big deal one might think…but the part that the average Facebook User doesn’t realize is the permissions you must give to Facebook in order to use the Facebook Messenger App. Here is a short list of the most disturbing permissions it requires and a quick explanation of what it means to you and your privacy.

    • Change the state of network connectivity – This means that Facebook can change or alter your connection to the Internet or cell service. You’re basically giving Facebook the ability to turn features on your phone on and off for its own reasons without telling you.
    • Call phone numbers and send SMS messages – This means that if Facebook wants to…it can send text messages to your contacts on your behalf. Do you see the trouble in this? Who is Facebook to be able to access and send messages on your phone? You’re basically giving a stranger your phone and telling them to do what they want when they want!
    • Record audio, and take pictures and videos, at any time – Read that line again…RECORD audio…TAKE pictures…AT ANY TIME!! That means that the folks at Facebook can see through the lens on your phone whenever they want…they can listen to what you're saying via your microphone if they choose to!!
    • Read your phone’s call log, including info about incoming and outgoing calls – Who have you been calling? How long did you talk to them? Now Facebook will know all of this because you’ve downloaded the new Facebook messenger app.
    • Read your contact data, including who you call and email and how often – Another clear violation of your privacy. Now Facebook will be able to read e-mails you've sent and take information from them to use for its own gain. Whether it's for "personalized advertisements" or for "research purposes"…whatever the reason, they're accessing your private encounters.
    • Read personal profile information stored on your device – This means that if you have addresses, personal info, pictures or anything else that’s near and dear to your personal life…they can read it.
    • Get a list of accounts known by the phone, or other apps you use – Facebook will now have a tally of all the apps you use, how often you use them and what information you keep or exchange on those apps.
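The bullet points above correspond to standard Android manifest permissions. As a purely illustrative sketch (this is hypothetical audit code, not anything from Facebook; the permission-name strings are real Android identifiers, but the example app data is made up), you could flag them in an app's requested-permission list like this:

```python
# Illustrative sketch: flag "disturbing" Android permissions in an app's
# requested-permission list. The permission names are real Android manifest
# permission constants matching the bullets above; the sample app is made up.

DISTURBING = {
    "android.permission.CHANGE_NETWORK_STATE",  # change network connectivity
    "android.permission.CALL_PHONE",            # place phone calls
    "android.permission.SEND_SMS",              # send SMS messages
    "android.permission.RECORD_AUDIO",          # record audio at any time
    "android.permission.CAMERA",                # take pictures and videos
    "android.permission.READ_CALL_LOG",         # read the call log
    "android.permission.READ_CONTACTS",         # read contact data
    "android.permission.GET_ACCOUNTS",          # list accounts on the device
}

def flag_permissions(requested):
    """Return the subset of requested permissions that appear on the
    'disturbing' list, so a user can review them before installing."""
    return sorted(DISTURBING & set(requested))

# Hypothetical app requesting three permissions:
requested = [
    "android.permission.INTERNET",
    "android.permission.RECORD_AUDIO",
    "android.permission.READ_CONTACTS",
]
print(flag_permissions(requested))
```

Nothing here requires rooting a phone; Android's Settings app shows the same requested-permission list for any installed app.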

    Hopefully, you take this as seriously as I do. After reading more about it and studying the permissions, I have now deleted the app from my phone and don't intend to ever use it again. I still have the Facebook app, but I just won't use the messaging feature unless I'm at a computer. Even then, I might not use messaging anymore.

    With these kinds of privacy invasions I think Facebook is pushing the limits to what people will let them get away with. I remember when the Internet first began its march toward socializing dominance when AOL would send us CD’s for free trials every week. On AOL, we made screen names that somewhat hid our identities and protected us against the unseen dangers online. Now, it seems that we’ve forgotten about that desire to protect our identity and we just lay down and let them invade our privacy.

    There may be no turning back at this point because many people won’t read this or investigate the permissions of Facebook’s new mandatory app but at least I can say I tried to help us put up a fight. Pass this along to your friends and at least try to let them know what they’re getting into.


  2. #62


    Your New Facebook "Friend" may be the FBI


    The Feds are on Facebook. And MySpace, LinkedIn and Twitter, too.

    U.S. law enforcement agents are following the rest of the Internet world into popular social-networking services, going undercover with false online profiles to communicate with suspects and gather private information, according to an internal Justice Department document that offers a tantalizing glimpse of issues related to privacy and crime-fighting.

    Think you know who's behind that "friend" request? Think again. Your new "friend" just might be the FBI.

    The document, obtained in a Freedom of Information Act lawsuit, makes clear that U.S. agents are already logging on surreptitiously to exchange messages with suspects, identify a target's friends or relatives and browse private information such as postings, personal photographs and video clips.

    Among other purposes: Investigators can check suspects' alibis by comparing stories told to police with tweets sent at the same time about their whereabouts. Online photos from a suspicious spending spree — people posing with jewelry, guns or fancy cars — can link suspects or their friends to robberies or burglaries.

    The Electronic Frontier Foundation, a San Francisco-based civil liberties group, obtained the Justice Department document when it sued the agency and five others in federal court. The 33-page document underscores the importance of social networking sites to U.S. authorities. The foundation said it would publish the document on its Web site on Tuesday.

    With agents going undercover, state and local police coordinate their online activities with the Secret Service, FBI and other federal agencies in a strategy known as "deconfliction" to keep out of each other's way.

    "You could really mess up someone's investigation because you're investigating the same person and maybe doing things that are counterproductive to what another agency is doing," said Detective Frank Dannahey of the Rocky Hill, Conn., Police Department, a veteran of dozens of undercover cases.

    A decade ago, agents kept watch over AOL and MSN chat rooms to nab sexual predators. But those text-only chat services are old-school compared with today's social media, which contain mountains of personal data, photographs, videos and audio clips — a potential treasure trove of evidence for cases of violent crime, financial fraud and much more.

    The Justice Department document, part of a presentation given in August by top cybercrime officials, describes the value of Facebook, Twitter, MySpace, LinkedIn and other services to government investigators. It does not describe in detail the boundaries for using them.

    "It doesn't really discuss any mechanisms for accountability or ensuring that government agents use those tools responsibly," said Marcia Hoffman, a senior attorney with the civil liberties foundation.

    The group sued in Washington to force the government to disclose its policies for using social networking sites in investigations, data collection and surveillance.

    Covert investigations on social-networking services are legal and governed by internal rules, according to Justice Department officials. But they would not say what those rules are.

    The Justice Department document raises a legal question about a social-media bullying case in which U.S. prosecutors charged a Missouri woman with computer fraud for creating a fake MySpace account — effectively the same activity that undercover agents are doing, although for different purposes.

    The woman, Lori Drew, helped create an account for a fictitious teen boy on MySpace and sent flirtatious messages to a 13-year-old neighborhood girl in his name. The girl hanged herself in October 2006, in a St. Louis suburb, after she received a message saying the world would be better without her.

    A jury in California, where MySpace has its servers, convicted Drew of three misdemeanor counts of accessing computers without authorization because she was accused of violating MySpace's rules against creating fake accounts. But last year a judge overturned the verdicts, citing the vagueness of the law.

    "If agents violate terms of service, is that 'otherwise illegal activity'?" the document asks. It doesn't provide an answer.

    Facebook's rules, for example, specify that users "will not provide any false personal information on Facebook, or create an account for anyone other than yourself without permission." Twitter's rules prohibit its users from sending deceptive or false information. MySpace requires that information for accounts be "truthful and accurate."

    A former U.S. cybersecurity prosecutor, Marc Zwillinger, said investigators should be able to go undercover in the online world the same way they do in the real world, even if such conduct is barred by a company's rules. But there have to be limits, he said.

    In the face-to-face world, agents can't impersonate a suspect's spouse, child, parent or best friend. But online, behind the guise of a social-networking account, they can.

    "This new situation presents a need for careful oversight so that law enforcement does not use social networking to intrude on some of our most personal relationships," said Zwillinger, whose firm does legal work for Yahoo and MySpace.

    Undercover operations aren't necessary if the suspect is reckless. Federal authorities nabbed a man wanted on bank fraud charges after he started posting Facebook updates about the fun he was having in Mexico.

    Maxi Sopo, a native of Cameroon living in the Seattle area, apparently slipped across the border into Mexico in a rented car last year after learning that federal agents were investigating the alleged scheme. The agents initially could find no trace of him on social media sites, and they were unable to pin down his exact location in Mexico. But they kept checking and eventually found Sopo on Facebook.

    While Sopo's online profile was private, his list of friends was not. Assistant U.S. Attorney Michael Scoville began going through the list and was able to learn where Sopo was living. Mexican authorities arrested Sopo in September. He is awaiting extradition to the U.S.

    The Justice document describes how Facebook, MySpace and Twitter have interacted with federal investigators: Facebook is "often cooperative with emergency requests," the government said. MySpace preserves information about its users indefinitely and even stores data from deleted accounts for one year. But Twitter's lawyers tell prosecutors they need a warrant or subpoena before the company turns over customer information, the document says.

    "Will not preserve data without legal process," the document says under the heading, "Getting Info From Twitter ... the bad news."

    Twitter did not respond to a request for comment for this story.

    The chief security officer for MySpace, Hemanshu Nigam, said MySpace doesn't want to be the company that stands in the way of an investigation.

    "That said, we also want to make sure that our users' privacy is protected and any data that's disclosed is done under proper legal process," Nigam said.

    MySpace requires a search warrant for private messages less than six months old, according to the company.

    Facebook spokesman Andrew Noyes said the company has put together a handbook to help law enforcement officials understand "the proper ways to request information from Facebook to aid investigations."

    The Justice document includes sections about its own lawyers. For government attorneys taking cases to trial, social networks are a "valuable source of info on defense witnesses," they said. "Knowledge is power. ... Research all witnesses on social networking sites."

    But the government warned prosecutors to advise their own witnesses not to discuss cases on social media sites and to "think carefully about what they post."

    It also cautioned federal law enforcement officials to think prudently before adding judges or defense counsel as "friends" on these services.

    "Social networking and the courtroom can be a dangerous combination," the government said.


  3. #63


    The US government can brand you a terrorist based on a Facebook post.

    The US government’s web of surveillance is vast and interconnected. Now we know just how opaque, inefficient and discriminatory it can be.

    As we were reminded again just this week, you can be pulled into the National Security Agency’s database quietly and quickly, and the consequences can be long and enduring. Through ICREACH, a Google-style search engine created for the intelligence community, the NSA provides data on private communications to 23 government agencies. More than 1,000 analysts had access to that information.

    This kind of data sharing, however, isn’t limited to the latest from Edward Snowden’s NSA files. It was confirmed earlier this month that the FBI shares its master watchlist, the Terrorist Screening Database, with at least 22 foreign governments, countless federal agencies, state and local law enforcement, plus private contractors.

    The watchlist tracks “known” and “suspected” terrorists and includes both foreigners and Americans. It’s also based on loose standards and secret evidence, which ensnares innocent people. Indeed, the standards are so low that the US government’s guidelines specifically allow for a single, uncorroborated source of information – including a Facebook or Twitter post – to serve as the basis for placing you on its master watchlist.

    Of the 680,000 individuals on that FBI master list, roughly 40% have “no recognized terrorist group affiliation”, according to the Intercept. These individuals don’t even have a connection – as the government loosely defines it – to a designated terrorist group, but they are still branded as suspected terrorists.

    The absurdities don’t end there. Take Dearborn, Michigan, a city with a population under 100,000 that is known for its large Arab American community – and has more watchlisted residents than any other city in America except New York.

    These eye-popping numbers are largely the result of the US government’s use of a loose standard – so-called “reasonable suspicion” – in determining who, exactly, can be watchlisted.

    Reasonable suspicion is such a low standard because it requires neither “concrete evidence” nor “irrefutable evidence”. Instead, an official is permitted to consider “reasonable inferences” and “to draw from the facts in light of his/her experience”.

    Consider a real world context – actual criminal justice – where an officer needs reasonable suspicion to stop a person in the street and ask him or her a few questions. Courts have controversially held that avoiding eye contact with an officer, traveling alone, and traveling late at night, for example, all amount to reasonable suspicion.

    These vague criteria are now being used to label innocent people as terrorism suspects.

    Moreover, because the watchlist isn’t limited to known, actual terrorists, an official can watchlist a person if he has reasonable suspicion to believe that the person is a suspected terrorist. It’s a circular logic – individuals can be watchlisted if they are suspected of being suspected terrorists – that is ultimately backwards, and must be changed.

    The government’s self-mandated surveillance guidance also includes loopholes that permit watchlisting without even showing reasonable suspicion. For example, non-citizens can be watchlisted for being associated with a watchlisted person – even if their relationship with that person is entirely innocuous. Another catch-all exception allows non-citizens to be watchlisted, so long as a source or tipster describes the person as an “extremist”, a “militant”, or in similar terms, and the “context suggests a nexus to terrorism”. The FBI’s definition of “nexus”, in turn, is far more nebulous than they’re letting on.

    Because the watchlist designation process is secret, there’s no way of knowing just how many innocent people are added to the list due to these absurdities and loopholes. And yet, history shows that innocent people are inevitably added to the list and suffer life-altering consequences. Life on the master watchlist can trigger enhanced screening at borders and airports; being on the No Fly List, which is a subset of the larger terrorist watchlist, can prevent airline travel altogether. The watchlist can separate family members for months or years, isolate individuals from friends and associates, and ruin employment prospects.

    Being branded a terrorism suspect also has far-reaching privacy implications. The watchlist is widely accessible, and government officials routinely collect the biometric data of watchlisted individuals, including their fingerprints and DNA strands. Law enforcement has likewise been directed to gather any and all available evidence when encountering watchlisted individuals, including receipts, business cards, health information and bank statements.

    Watchlisting is an awesome power, and if used, must be exercised prudently and transparently.

    The standards for inclusion should be appropriately narrow, the evidence relied upon credible and genuine, and the redress and review procedures consistent with basic constitutional requirements of fairness and due process. Instead, watchlisting is being used arbitrarily under a cloud of secrecy.

    A watchlist saturated with innocent people diverts attention from real, genuine threats. A watchlist that disproportionately targets Arab and Muslim Americans or other minorities stigmatizes innocent people and alienates them from law enforcement. A watchlist based on poor standards and secret processes raises major constitutional concerns, including the right to travel freely and not to be deprived of liberty without due process of law.

    Indeed, you can’t help but wonder: are you already on the watchlist?


  4. #64


    Facebook suspending Native Americans over ‘fake’ names

    February 11, 2015

    Native Americans are complaining that Facebook's "real name" policy results in many accounts being repeatedly suspended, as the company's algorithm cannot believe names such as Lone Hill or Brown Eyes could be real. According to a report from Colorlines, users with Native American names are being locked out of their accounts, with the social networking site demanding they prove their identities to regain access. "We require people to provide the name they use in real life," the social network says on its help page. "That way, you always know who you're connecting with." This policy is having a direct negative effect on Native Americans, whose rare names sometimes raise red flags. In a blog post, Dana Lone Hill said she fell victim to this policy, as her account – active since 2007 – was suspended.

    Lone Hill, a Lakota Indian, says she received a message from the social media giant saying that "it looks like the name on your Facebook account may not be your authentic name," stating that she must submit proper IDs to prove her existence. After sending in her photo ID, library card, and one piece of mail, she received a reply from the company urging her to be patient while Facebook investigated her real identity. After almost a week, Lone Hill's account was finally restored. "I had a little bit of paranoia at first regarding issues I had been posting about until I realized I wasn't the only Native American this happened to," Lone Hill wrote. One of Lone Hill's friends had to change his Cherokee alphabet name to English, while some others "were forced to either smash the two word last names together or omit one of the two words in the last name."

    In the case of Oglala Lakota Brown Eyes, Facebook even "changed his name to Lance Brown," Lone Hill wrote; it took the threat of a class-action lawsuit for the company to allow him to use his real name again.
    Some Native Americans have been granted administrative protection status to avoid recurring problems, such as Shane Creepingbear, who was “kicked off Facebook for having a fake name” on Columbus Day last year. Creepingbear, of the Kiowa Tribe of Oklahoma, says it’s a “problem when someone decides they are the arbiter of names...it can come off a tad racist.”

    To deal with the problem – which apparently started in 2009 – more than 10,000 Indians signed a petition calling on Facebook to “allow Native Americans to use their Native names on their profiles.”
    A Facebook spokesperson told Colorlines that significant improvements have been made by the company recently, “including enhancing the overall experience and expanding the options available for verifying an authentic name.”



    If FB deems your name is not a real name, they want you to upload a photo ID bearing that name or some other official documents; otherwise you will never access your account again. The policy is just another ploy to force people to put their real identities up, so the real names and real pictures can be stored in databases to sell to and share with third parties and intelligence agencies.

  5. #65


    Facebook reveals news feed experiment to control emotions

    Protests over secret study involving 689,000 users in which friends' postings were moved to influence moods

    It already knows whether you are single or dating, the first school you went to and whether you like or loathe Justin Bieber. But now Facebook, the world's biggest social networking site, is facing a storm of protest after it revealed it had discovered how to make users feel happier or sadder with a few computer key strokes.

    It has published details of a vast experiment in which it manipulated information posted on 689,000 users' home pages and found it could make people feel more positive or negative through a process of "emotional contagion".

    In a study with academics from Cornell and the University of California, Facebook filtered users' news feeds – the flow of comments, videos, pictures and web links posted by other people in their social network. One test reduced users' exposure to their friends' "positive emotional content", resulting in fewer positive posts of their own. Another test reduced exposure to "negative emotional content" and the opposite happened.

    The study concluded: "Emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks."
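The manipulation described above can be pictured with a toy sketch. This is not the study's actual code (which was never published); it is a minimal illustration, assuming posts have already been labeled by sentiment, of what suppressing a fraction of one emotional category from a feed looks like:

```python
# Toy sketch of the feed filtering described above (not the real study code):
# a news feed is a list of posts labeled "positive", "negative", or "neutral";
# one experimental condition drops a fraction of positive posts, the other
# drops a fraction of negative posts.

import random

def filter_feed(posts, suppress, fraction, rng):
    """Remove roughly `fraction` of the posts whose label equals `suppress`;
    all other posts pass through untouched."""
    return [p for p in posts
            if p["label"] != suppress or rng.random() > fraction]

rng = random.Random(0)  # fixed seed so the sketch is reproducible
feed = ([{"label": "positive"}] * 50 +
        [{"label": "negative"}] * 30 +
        [{"label": "neutral"}] * 20)

# Condition 1 of the study: reduce exposure to positive emotional content.
reduced_positive = filter_feed(feed, "positive", 0.5, rng)
print(len(feed), len(reduced_positive))
```

The "emotional contagion" finding was that users shown the reduced-positive feed then wrote fewer positive posts of their own, and vice versa for the reduced-negative condition.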

    Lawyers, internet activists and politicians said this weekend that the mass experiment in emotional manipulation was "scandalous", "spooky" and "disturbing".

    On Sunday evening, a senior British MP called for a parliamentary investigation into how Facebook and other social networks manipulated emotional and psychological responses of users by editing information supplied to them.

    Jim Sheridan, a member of the Commons media select committee, said the experiment was intrusive. "This is extraordinarily powerful stuff and if there is not already legislation on this, then there should be to protect people," he said. "They are manipulating material from people's personal lives and I am worried about the ability of Facebook and others to manipulate people's thoughts in politics or other areas. If people are being thought-controlled in this kind of way there needs to be protection and they at least need to know about it."

    A Facebook spokeswoman said the research, published this month in the journal Proceedings of the National Academy of Sciences in the US, was carried out "to improve our services and to make the content people see on Facebook as relevant and engaging as possible".

    She said: "A big part of this is understanding how people respond to different types of content, whether it's positive or negative in tone, news from friends, or information from pages they follow."

    But other commentators voiced fears that the process could be used for political purposes in the runup to elections or to encourage people to stay on the site by feeding them happy thoughts and so boosting advertising revenues.

    In a series of Twitter posts, Clay Johnson, the co-founder of Blue State Digital, the firm that built and managed Barack Obama's online campaign for the presidency in 2008, said: "The Facebook 'transmission of anger' experiment is terrifying."

    He asked: "Could the CIA incite revolution in Sudan by pressuring Facebook to promote discontent? Should that be legal? Could Mark Zuckerberg swing an election by promoting Upworthy [a website aggregating viral content] posts two weeks beforehand? Should that be legal?"

    It was claimed that Facebook may have breached ethical and legal guidelines by not informing its users they were being manipulated in the experiment, which was carried out in 2012.

    The study said altering the news feeds was "consistent with Facebook's data use policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research".

    But Susan Fiske, the Princeton academic who edited the study, said she was concerned. "People are supposed to be told they are going to be participants in research and then agree to it and have the option not to agree to it without penalty."

    James Grimmelmann, professor of law at the University of Maryland, said Facebook had failed to gain "informed consent" as defined by the US federal policy for the protection of human subjects, which demands explanation of the purposes of the research and the expected duration of the subject's participation, a description of any reasonably foreseeable risks and a statement that participation is voluntary. "This study is a scandal because it brought Facebook's troubling practices into a realm – academia – where we still have standards of treating people with dignity and serving the common good," he said on his blog.

    It is not new for internet firms to use algorithms to select content to show to users and Jacob Silverman, author of Terms of Service: Social Media, Surveillance, and the Price of Constant Connection, told Wire magazine on Sunday the internet was already "a vast collection of market research studies; we're the subjects".

    "What's disturbing about how Facebook went about this, though, is that they essentially manipulated the sentiments of hundreds of thousands of users without asking permission," he said. "Facebook cares most about two things: engagement and advertising. If Facebook, say, decides that filtering out negative posts helps keep people happy and clicking, there's little reason to think that they won't do just that. As long as the platform remains such an important gatekeeper – and their algorithms utterly opaque – we should be wary about the amount of power and trust we delegate to it."

    Robert Blackie, director of digital at Ogilvy One marketing agency, said the way internet companies filtered information they showed users was fundamental to their business models, which made them reluctant to be open about it.

    "To guarantee continued public acceptance they will have to discuss this more openly in the future," he said. "There will have to be either independent reviewers of what they do or government regulation. If they don't get the value exchange right then people will be reluctant to use their services, which is potentially a big business problem."


    Facebook apologises for psychological experiments on users

    Journal that published Facebook mood study expresses 'concern' at its ethics

    Facebook denies emotion contagion study had government and military ties


  6. #66


    Facebook Banning People For Replying To Islamophobes

    Facebook is getting completely ridiculous with their banning

    22nd September 2016

    Yep, another 30 day ban on my political facebook for posting something that does not meet "community guidelines".

    So I will let you decide if Mark FascistBerg has finally flipped his lid on this one.

    Maybe Mark FascistBerg is secretly an Islamophobe.

    I guess these "Hate Facts" and the new idea of "Hate Truth" are really upsetting the facebook censor nazis and the wimminz! LOL!

    Enjoy gentlemen!

    (Oh, and of course get yourself a manbook account and use our social network functions to avoid bans like this.)

    Yes...this earned me a 30 day ban! LOL!


  7. #67


    Dislike: Facebook names Netanyahu’s former advisor ‘head of policy’

    19 June 2016

    Binyamin Netanyahu’s longtime senior adviser Jordana Cutler has been named as Facebook’s head of policy and communication in Israel’s latest bid to tackle the BDS movement online.

    A longtime senior adviser to Israeli Prime Minister Binyamin Netanyahu has been appointed as Facebook's head of policy and communication in the latest cooperation between the social networking site and the Israeli government to tackle the BDS movement.

    Jordana Cutler, also chief of staff at the Israeli embassy in Washington, has joined Facebook’s Israel office to oversee the planning and execution of measures taken to combat BDS campaigns.

    Cutler’s new post was applauded by the minister of public security Gilad Erdan, who announced on Thursday a series of legislative measures taken by his government against promoting the boycott of Israel.

    “If we want to convince the world that de-legitimation of Israel is something wrong and that there should be consequences, we must start here in Israel,” Erdan was quoted by Israeli media as saying during a conference in Herzliya.

    “There will now be a real price to pay for someone working […] to isolate [Israel] from the rest of the world. I set up a legal team, together with the ministry of justice, that will promote governmental legislation on the matter,” Erdan said.

    “There has been an advance in dialogue between the state of Israel and Facebook,” he said, “Facebook realises that it has a responsibility to monitor its platform and remove content. I hope it will be regulated for good.”

    “We will use legitimate democratic tools to fight this battle. We will make companies shift from being on the attack against Israel to the defence of protecting themselves,” he added.

    The BDS movement, which describes itself as a global movement of citizens, advocates for non-violent campaigns of boycotts, divestment and sanctions as a means to overcome the Israeli regime of occupation, settler-colonialism and apartheid.

    Upon its launch in 2005, the campaign was widely ignored and even laughed at, by Israel and its supporters around the world.

    But the Jewish state has since been troubled by the growing popularity of the movement, whose latest campaign, Tov Ramadan, raises awareness of Israeli settlement products and encourages people breaking their fast during the month of Ramadan to boycott them.


  8. #68


    Facial Recognition in Facebook Video Chat Device Raises Fears

    Facebook is developing a video chat device that can recognise users’ faces, according to a new report.
    The box is said to be similar to the Amazon Echo Show, and will feature a camera, touchscreen and speakers. However, a person familiar with the project says consumers have told Facebook that they fear the device could be used to spy on them, Business Insider reports.

    The device has been codenamed Project Aloha, and is set to be released by Facebook in May 2018. However, it may hit the market under a new brand name. According to Business Insider, the social networking company is afraid that widespread consumer mistrust of Facebook will cripple the device. It conducted marketing studies for Project Aloha and reportedly received “overwhelming concern” that Project Aloha would help the company spy on users.

    It was recently reported that Facebook has been using data gathered from another company for detailed insights on people’s app and website usage habits, such as which apps they use, how frequently they use them and even how long they use them for.

    People didn’t even need to have Facebook on their phones, the report said. As well as creating a new brand name for Aloha, Facebook is also thinking up “creative ways” to market it: for instance, as a device to help old people communicate with their families and friends. It’s being developed by Facebook’s Building 8 division, which is also working on mind-reading technology that Facebook describes as a “brain-computer speech-to-text interface”.


  9. #69
    Member Array
    Join Date
    Jan 2007


    Facebook Says It Is Deleting Accounts at the Direction of the U.S. and Israeli Governments

    video: https://www.facebook.com/JewishVoice...56959047319992

    In September of last year, we noted that Facebook representatives were meeting with the Israeli government to determine which Facebook accounts of Palestinians should be deleted on the ground that they constituted “incitement.” The meetings — called for and presided over by one of the most extremist and authoritarian Israeli officials, pro-settlement Justice Minister Ayelet Shaked — came after Israel threatened Facebook that its failure to voluntarily comply with Israeli deletion orders would result in the enactment of laws requiring Facebook to do so, upon pain of being severely fined or even blocked in the country.

    The predictable results of those meetings are now clear and well-documented. Ever since, Facebook has been on a censorship rampage against Palestinian activists who protest the decades-long, illegal Israeli occupation, all directed and determined by Israeli officials. Indeed, Israeli officials have been publicly boasting about how obedient Facebook is when it comes to Israeli censorship orders:

    Shortly after news broke earlier this month of the agreement between the Israeli government and Facebook, Israeli Justice Minister Ayelet Shaked said Tel Aviv had submitted 158 requests to the social media giant over the previous four months asking it to remove content it deemed “incitement.” She said Facebook had granted 95 percent of the requests.

    She’s right. The submission to Israeli dictates is hard to overstate: As the New York Times put it in December of last year, “Israeli security agencies monitor Facebook and send the company posts they consider incitement. Facebook has responded by removing most of them.”

    What makes this censorship particularly consequential is that “96 percent of Palestinians said their primary use of Facebook was for following news.” That means that Israeli officials have virtually unfettered control over a key communications forum of Palestinians.

    In the weeks following those Facebook-Israel meetings, reported The Independent, “the activist collective Palestinian Information Center reported that at least 10 of their administrators’ accounts for their Arabic and English Facebook pages — followed by more than 2 million people — have been suspended, seven of them permanently, which they say is a result of new measures put in place in the wake of Facebook’s meeting with Israel.” Last March, Facebook briefly shut down the Facebook page of the political party, Fatah, followed by millions, “because of an old photo posted of former leader Yasser Arafat holding a rifle.”

    A 2016 report from the Palestinian Center for Development and Media Freedoms detailed how extensive the Facebook censorship was:

    Pages and personal accounts that were filtered and blocked: Palestinian Dialogue Network (PALDF.net) Gaza now, Jerusalem News Network, Shihab agency, Radio Bethlehem 2000, Orient Radio Network, page Mesh Heck, Ramallah news, journalist Huzaifa Jamous from Abu Dis, activist Qassam Bedier, activist Mohammed Ghannam, journalist Kamel Jbeil, administrative accounts for Al Quds Page, administrative accounts Shihab agency, activist Abdel-Qader al-Titi, youth activist Hussein Shajaeih, Ramah Mubarak (account is activated), Ahmed Abdel Aal (account is activated), Mohammad Za’anin (still deleted), Amer Abu Arafa (still deleted), Abdulrahman al-Kahlout (still deleted).

    Needless to say, Israelis have virtually free rein to post whatever they want about Palestinians.
    Calls by Israelis for the killing of Palestinians are commonplace on Facebook, and largely remain undisturbed.

    As Al Jazeera reported last year, “Inflammatory speech posted in the Hebrew language … has attracted much less attention from the Israeli authorities and Facebook.” One study found that “122,000 users directly called for violence with words like ‘murder,’ ‘kill,’ or ‘burn.’ Arabs were the No. 1 recipients of hateful comments.” Yet there appears to be little effort by Facebook to censor any of that.

    Though some of the most inflammatory and explicit calls for murder are sometimes removed, Facebook continues to allow the most extremist calls for incitement against Palestinians to flourish. Indeed, Israel’s leader, Benjamin Netanyahu, has often used social media to post what is clearly incitement to violence against Palestinians generally. In contrast to Facebook’s active suppression against Palestinians, the very idea that Facebook would ever use its censorship power against Netanyahu or other prominent Israelis calling for violence and inciting attacks is unthinkable. Indeed, as Al Jazeera concisely put it, “Facebook hasn’t met Palestinian leaders to discuss their concern.”

    Facebook now seems to be explicitly admitting that it also intends to follow the censorship orders of the U.S. government. Earlier this week, the company deleted the Facebook and Instagram accounts of Ramzan Kadyrov, the repressive, brutal, and authoritarian leader of the Chechen Republic, who had a combined 4 million followers on those accounts. To put it mildly, Kadyrov — who is given free rein to rule the province in exchange for ultimate loyalty to Moscow — is the opposite of a sympathetic figure: He has been credibly accused of a wide range of horrific human rights violations, from the imprisonment and torture of LGBTs to the kidnapping and killing of dissidents.

    But none of that dilutes how disturbing and dangerous Facebook’s rationale for its deletion of his accounts is. A Facebook spokesperson told the New York Times that the company deleted these accounts not because Kadyrov is a mass murderer and tyrant, but that “Mr. Kadyrov’s accounts were deactivated because he had just been added to a United States sanctions list and that the company was legally obligated to act.”

    As the Times notes, this rationale appears dubious or at least inconsistently applied: Others who are on the same sanctions list, such as Venezuelan President Nicolas Maduro, remain active on both Facebook and Instagram. But just consider the incredibly menacing implications of Facebook’s claims.

    What this means is obvious: that the U.S. government — meaning, at the moment, the Trump administration — has the unilateral and unchecked power to force the removal of anyone it wants from Facebook and Instagram by simply including them on a sanctions list. Does anyone think this is a good outcome? Does anyone trust the Trump administration — or any other government — to compel social media platforms to delete and block anyone it wants to be silenced? As the ACLU’s Jennifer Granick told the Times:

    It’s not a law that appears to be written or designed to deal with the special situations where it’s lawful or appropriate to repress speech. … This sanctions law is being used to suppress speech with little consideration of the free expression values and the special risks of blocking speech, as opposed to blocking commerce or funds as the sanctions was designed to do. That’s really problematic.

    Does Facebook’s policy of blocking people from its platform who are sanctioned apply to all governments? Obviously not. It goes without saying that if, say, Iran decided to impose sanctions on Chuck Schumer for his support of Trump’s policy of recognizing Jerusalem as the Israeli capital, Facebook would never delete the accounts of the Democratic Party Senate minority leader — just as Facebook would never delete the accounts of Israeli officials who incite violence against Palestinians or who are sanctioned by Palestinian officials. Just last month, Russia announced retaliatory sanctions against various Canadian officials and executives, but needless to say, Facebook took no action to censor them or block their accounts.

    Similarly, would Facebook ever dare censor American politicians or journalists who use social media to call for violence against America’s enemies? To ask the question is to answer it.

    As is always true of censorship, there is one, and only one, principle driving all of this: power. Facebook will submit to and obey the censorship demands of governments and officials who actually wield power over it, while ignoring those who do not. That’s why declared enemies of the U.S. and Israeli governments are vulnerable to censorship measures by Facebook, whereas U.S. and Israeli officials (and their most tyrannical and repressive allies) are not.

    All of this illustrates that the same severe dangers from state censorship are raised at least as much by the pleas for Silicon Valley giants to more actively censor “bad speech.” Calls for state censorship may often be well-intentioned — a desire to protect marginalized groups from damaging “hate speech” — yet, predictably, they are far more often used against marginalized groups: to censor them rather than protect them. One need merely look at how hate speech laws are used in Europe, or on U.S. college campuses, to see that the censorship victims are often critics of European wars, or activists against Israeli occupation, or advocates for minority rights.

    One can create a fantasy world in one’s head, if one wishes, in which Silicon Valley executives use their power to protect marginalized peoples around the world by censoring those who wish to harm them. But in the real world, that is nothing but a sad pipe dream. Just as governments will, these companies will use their censorship power to serve, not to undermine, the world’s most powerful factions.

    Just as one might cheer the censorship of someone one dislikes without contemplating the long-term consequences of the principle being validated, one can cheer the disappearance from Facebook and Instagram of a Chechen monster. But Facebook is explicitly telling you that the reason for its actions is that it was obeying the decrees of the U.S. government about who must be shunned.

    It’s hard to believe that anyone’s ideal view of the internet entails vesting power in the U.S. government, the Israeli government, and other world powers to decide who may be heard on it and who must be suppressed. But increasingly, in the name of pleading with internet companies to protect us, that’s exactly what is happening.


    Former Facebook executive: social media is ripping society apart

    Chamath Palihapitiya, former vice-president of user growth, expressed regret for his part in building tools that destroy ‘the social fabric of how society works’

    A former Facebook executive has said he feels “tremendous guilt” over his work on “tools that are ripping apart the social fabric of how society works”, joining a growing chorus of critics of the social media giant.

    Chamath Palihapitiya, who was vice-president for user growth at Facebook before he left the company in 2011, said: “The short-term, dopamine-driven feedback loops that we have created are destroying how society works. No civil discourse, no cooperation, misinformation, mistruth.”

    The remarks, which were made at a Stanford Business School event in November, were surfaced by tech website the Verge on Monday.

    “This is not about Russian ads,” he added. “This is a global problem. It is eroding the core foundations of how people behave by and between each other.”

    Ex-Facebook president Sean Parker: site made to exploit human 'vulnerability'

    Palihapitiya’s comments last month were made a day after Facebook’s founding president, Sean Parker, criticized the way that the company “exploit[s] a vulnerability in human psychology” by creating a “social-validation feedback loop” during an interview at an Axios event.

    Parker had said that he was “something of a conscientious objector” to using social media, a stance echoed by Palihapitiya who said that he was now hoping to use the money he made at Facebook to do good in the world.

    “I can’t control them,” Palihapitiya said of his former employer. “I can control my decision, which is that I don’t use that ****. I can control my kids’ decisions, which is that they’re not allowed to use that ****.”

    He also called on his audience to “soul-search” about their own relationship to social media. “Your behaviors, you don’t realize it, but you are being programmed,” he said. “It was unintentional, but now you gotta decide how much you’re going to give up, how much of your intellectual independence.”

    Social media companies have faced increased scrutiny over the past year as critics increasingly link growing political divisions across the globe to the handful of platforms that dominate online discourse.

    Many observers attributed the unexpected outcomes of the 2016 US presidential election and Brexit referendum at least in part to the ideological echo chambers created by Facebook’s algorithms, as well as the proliferation of fake news, conspiracy mongering, and propaganda alongside legitimate news sources in Facebook’s news feeds.

    The company only recently acknowledged that it sold advertisements to Russian operatives seeking to sow division among US voters during the 2016 election.

    Facebook has also faced significant criticism for its role in amplifying anti-Rohingya propaganda in Myanmar amid suspected ethnic cleansing of the Muslim minority.

    Palihapitiya referenced a case from the Indian state of Jharkhand this spring, when false WhatsApp messages warning of a group of kidnappers led to the lynching of seven people. WhatsApp is owned by Facebook.

    “That’s what we’re dealing with,” Palihapitiya said. “Imagine when you take that to the extreme where bad actors can now manipulate large swaths of people to do anything you want. It’s just a really, really bad state of affairs.”

    Facebook responded to Palihapitiya’s comments on Tuesday, noting that the former executive had not worked for the company in six years.

    “When Chamath was at Facebook we were focused on building new social media experiences and growing Facebook around the world,” a company spokeswoman, Susan Glick, said in a statement. “Facebook was a very different company back then, and as we have grown, we have realized how our responsibilities have grown too. We take our role very seriously and we are working hard to improve.”

    The company said that it was researching the impact of its products on “well-being” and noted that the CEO, Mark Zuckerberg, indicated a willingness to decrease profitability to address issues such as foreign interference in elections.


  10. #70
    Member Array
    Join Date
    Jan 2007


    Designers are using “dark UX” to turn you into a sleep-deprived internet addict

    Last week, Facebook rolled out an update to the design of its News Feed. The design tweaks—near imperceptible pixel spacing and color enhancements—were meant to make Facebook’s infinitely updating cascade of stories even easier to consume and comment on. “Small changes, like a few extra pixels of padding or the tint of a button, can have large and unexpected repercussions,” wrote Facebook design leads Shali Nguyen and Ryan Freitas in an Aug 15 Medium post about the update. One of the objectives for the update was to find a way to make Facebook even more “engaging and immersive.”

    Of course, the goal of any good design is to keep users engaged. But in the case of Facebook and other social media, “engagement” can easily turn into “can’t-stop-scrolling,” or even addiction. The result is an often overlooked ethical conundrum for designers creating digital experiences: When is design too engaging for users’ own good?

    Internet addiction is associated with poor health and obesity, social isolation and even brain damage, and the US National Institutes of Health classifies internet addiction as a social disorder that causes “neurological complications, psychological disturbances, and social problems.” A 2014 study published in the Cyberpsychology, Behavior and Social Networking journal suggests that 6% of the world’s total population has a problem with internet use. Most of us can feel degrees of the same problem when we try to detach from our mobile phones during dinner or fail to ignore work emails on holiday.

    Your brain on News Feed

    Design is a crucial element in making websites and apps more addictive, explain professors Ofir Turel and Antoine Bechara, who co-authored a 2016 neurological study comparing Facebook addiction to cocaine addiction. As the first interface seen by Facebook’s 2 billion users, every tweak to the News Feed has far-reaching effects.

    Facebook’s News Feed acts like a “slot machine for the brain,” say Turel and Bechara. Every time we refresh the site, it generates different rewards that encourage us to keep using it. In an email to Quartz, Turel and Bechara write:

    This is like the idea that most of us like cakes. When we open a refrigerator door multiple times and see the same cake, we will not be as motivated to eat like if we opened the refrigerator door multiple times, and each time see a different cake (i.e., be exposed to a variable reward)….The objective of introducing better “reward management” abilities was to ensure users spend more time on Facebook as the rewards they are exposed to after the new abilities were implemented are presumed to be larger than before.
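    The “different cake each time” idea Turel and Bechara describe is what behavioral psychologists call a variable-reward schedule, the same reinforcement pattern slot machines use. A minimal, hypothetical Python sketch (the function and probability are invented for illustration, not Facebook’s actual ranking code) shows the mechanic:

```python
import random

def refresh_feed(stories, reward_probability=0.4):
    """Simulate one pull of the 'slot machine': each refresh may or may
    not surface a novel, rewarding story (variable-ratio reinforcement)."""
    if random.random() < reward_probability:
        return random.choice(stories)  # a different "cake" this time
    return None  # nothing new this pull -- but maybe on the next refresh

# Rewards that arrive unpredictably, rather than on every refresh,
# are what make the checking behavior persistent.
random.seed(0)
refreshes = [refresh_feed(["photo", "news", "meme"]) for _ in range(10)]
hits = [r for r in refreshes if r is not None]
print(f"{len(hits)} rewarding refreshes out of {len(refreshes)}")
```

    The point of the sketch is the uncertainty itself: if `reward_probability` were 1.0, every refresh would be predictable and, per Turel and Bechara, less compelling.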

    The work is done through user experience, or UX, design, a specialization that considers the totality of a user’s experience while they’re using a piece of technology. The term was coined in the early 1990s by Don Norman, then Apple’s “user experience architect,” and today is typically used in describing the functionality of websites and apps.

    By improving the News Feed’s design, Facebook’s UX designers are eliminating any friction that might cue users to curtail their time on the platform. “When the interaction is smooth and requires no thought or difficult-to-remember steps, the behavior will more likely and easily become automated and rewarding,” Turel and Bechara explain to Quartz. “[Facebook’s] ultimate objective with these moves is to retain users’ attention and keep them scrolling, posting and using their sites.”

    Designer or master manipulator?

    Designing to encourage addictive behavior is a studied skill. Legions of designers are now learning the psychology of persuasion and use these tactics to make sites and apps “stickier.” One of these schools is the Stanford Persuasive Tech Lab. Spearheaded by behavior scientist BJ Fogg, the lab teaches students about the tenets of “captology,” the study of computers as persuasive technologies.

    Tristan Harris, an alumnus of the program and a former Google employee, has grimly described how designers learn these persuasive techniques: “There are conferences and workshops that teach people all these covert ways of getting people’s attention and orchestrating people’s lives,” Harris explained at TED last April. “I want you to imagine walking into a room, a control room with a bunch of people, a hundred people, hunched over a desk with little dials, and that that control room will shape the thoughts and feelings of a billion people. This might sound like science fiction, but this actually exists right now, today,” he said, describing the typical scenario in Silicon Valley product design departments.

    Fogg clarified to Quartz that ethics has always been part of his lab’s curriculum, pointing to several videos and published papers on the subject. Captology’s goal, he says, is to extend the user’s will and improve their well-being.

    “We believe that much like human persuaders, persuasive technologies can bring about positive changes in many domains, including health, business, safety, and education,” claims the program’s website. Fogg’s lab, for instance, has helped the US Centers for Disease Control increase the efficacy of their health campaigns in Africa through SMS. Captology lab students learn how to read user data and how to use it to sway users’ choices. Their curriculum includes classes on psychology, persuasion theory and several classes on how to create the most engaging content for Facebook.

    But can designers really be trusted to decide what’s good for us? Should we trust designers and programmers with so much power without regulations?

    Inside the world of “dark UX”

    UX design’s founding principle is utopian. According to Norman, “the first requirement for an exemplary user experience is to meet the exact needs of the customer, without fuss or bother.” Addictive, well-designed interfaces mean that UX designers are doing their jobs. And micro visual cues like a bigger “Buy Now” button, or flashy testimonials, can be just as much value-neutral tools of the trade as they are tactics in the battle for your attention.

    Deloitte Digital’s UX competency lead Emily Ryan describes the tricky line designers walk every day: “At the end of the day, it’s extremely tough to say ‘no’ to a product manager who’s telling you to add some feature that we know the user doesn’t want, doesn’t need and ultimately will mar their experience in some form.” Ryan, whose personal website is aptly named “UX is everywhere,” says that conscientious designers will propose alternatives to dirty, manipulative design tactics called “dark UX.”

    Dark UX is an industry term for sly design tricks that benefit the client’s bottom line. It ranges from deceptive defaults, such as a pre-checked opt-in to an email subscription or a pre-selected most-expensive option, to interfaces that require customers to supply their personal information before being allowed to look at the products on a website.
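    The pre-checked default is the simplest of these tricks to demonstrate. A hypothetical sketch (the form fields and function names are invented for illustration): when the opt-in box defaults to checked, every user who submits the form without touching it has “consented.”

```python
def build_subscription_form(dark_pattern=False):
    """Return the initial state of a signup form. With the dark pattern
    enabled, the marketing opt-in is pre-checked, so inaction = consent."""
    return {
        "email": "",
        "marketing_opt_in": dark_pattern,  # pre-checked box opts users in
    }

def submit(form, user_changes=None):
    """Apply whatever the user actually changed; untouched fields keep
    their defaults -- which is exactly what the dark pattern exploits."""
    result = dict(form)
    result.update(user_changes or {})
    return result

# A typical user only types an email address and clicks "Sign up":
honest = submit(build_subscription_form(dark_pattern=False),
                {"email": "a@example.com"})
dark = submit(build_subscription_form(dark_pattern=True),
              {"email": "a@example.com"})
print(honest["marketing_opt_in"], dark["marketing_opt_in"])  # prints: False True
```

    The user’s behavior is identical in both cases; only the designer’s choice of default decides whether they end up on a mailing list.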

    Ryan explains that there are different levels of UX hell. “‘Dark UX’ is on a scale. There is ‘less bad’ (simply bad experience) and ‘more bad’ (detrimental to a user’s overall safety & security) and most designers would likely be somewhat ok with being on the line on the less bad side than the more bad side,” she says.

    But ultimately, the customer is always right. “I wish we had more ability to push clients towards better design practices but from a realistic standpoint, I’m just not sure it’s going to do much. The client can always find someone willing to do what they want, regardless of whether it’s ethical or not,” reflects Ryan, who has been studying alternatives to UX design trickery. “The irony is that these tactics rarely work and often backfire,” she explains.

    As foot soldiers for a corporate or client mandate, designers rarely perceive or openly discuss the ethical dilemma in their work. While the legal and health professions have a professional ethical code, designers do not. Stephen P. Anderson, who penned the book Seductive Interaction Design, describes the scenario aptly:

    If you hire a lawyer to defend you, you expect that person to do everything in his power to prove your innocence or ensure you get a fair trial. If you hire a personal trainer to help you shed some pounds, you expect that person to use the tools and methods at her disposal to help you reach your goals. Similarly, if someone hires you to create a new homepage that leads to more sales, they expect you to use whatever skills you have to accomplish this goal. Do you hold back?

    The few attempts to establish a do-no-harm moral code or “Hippocratic Oath for Designers,” have turned out to be gimmicky, resulting in cute, stylized persona mantras that confuse ethics with aesthetics. Others fizzled out from lack of collective resolution to implement a mandatory code of conduct.

    For any ethical resolution to stick, designers first have to believe their work can directly cause good or harm.


  11. #71
    Member Array
    Join Date
    Jan 2007


    Twitter Has Literally No Explanation for Why Trump’s Anti-Muslim Retweets Are OK

    On Wednesday morning, without any commentary whatsoever, Donald Trump retweeted three videos originally uploaded by Jayda Fransen of Britain First, an ultranationalist, far-right hate group known for its fiercely anti-Muslim rhetoric. Twitter, which at least pretends to have rules against blatant incitement against racial and religious groups, at first gave Trump a pass on the grounds that his retweets “ensure people have an opportunity to see every side of an issue” and represent “a legitimate public interest.” Today, the company is taking that defense back, and replacing it with nothing.

    The three tweets are par for the course from any hate group — contextless clips with titles like “Muslim Destroys a Statue of Virgin Mary!” and “Muslim migrant beats up Dutch boy on crutches!” that are increasingly used to stoke ethnic fear in countries with immigrant populations like the U.K.’s. One would be hard pressed to come up with any other takeaway from the retweets beyond the suggestion that Muslims are dangerous and to be feared, content that clearly runs afoul of Twitter’s “hateful conduct” policy:

    At first, Twitter told The Intercept that Trump’s promotion of content that would normally violate the “hateful conduct” rule was exempted because “we believe there is a legitimate public interest in its availability.” Today, that was retracted, in a series of tweets that did nothing but confuse everyone who read them:

    Yet the “media policy” does nothing to address why content that violates Twitter’s terms of service would be permitted for any reason. Twitter CEO Jack Dorsey’s addition to the thread only provided further muddling and boilerplate:

    It’s fair to say at this point that with the (bizarre, inadequate) rationale from earlier this week retracted, and no actual rationale (flawed or otherwise) offered in its place, Twitter simply does not have a reason for allowing Donald Trump to openly incite hate — and potentially violence, according to the Department of State — contrary to the service’s stated policies.

    Repeated requests for clarification sent to Dorsey were not returned, and a Twitter spokesperson declined to explain anything at all on the record, adding only, “We don’t have anything to add to our Tweets today and Jack’s follow-on Tweet after, but thanks for checking.” The spokesperson also requested that I cease directing questions about Twitter to Dorsey, the company’s CEO. A call to the Twitter spokesperson’s work number was answered with a voicemail message asking that reporters not leave voicemails.


