Educational Psychology & Emerging Technologies: Critical Perspectives and Updates
This curated collection includes updates, resources, and research offering critical perspectives on the intersections of educational psychology and emerging technologies in education. The page also serves as a research tool to organize online content (the funnel-shaped icon allows keyword search). For more on the intersections of the privatization and technologization of education, with critiques of social impact finance and related technologies, please visit http://bit.ly/sibgamble and http://bit.ly/chart_look. For posts regarding screen time risks to health and development, see http://bit.ly/screen_time, and for updates related to AI and data concerns, please visit http://bit.ly/DataJusticeLinks. [Note: Views presented on this page are re-shared from external websites. The content may not necessarily represent the views or official position of the curator or of the curator's employer.]
Scooped by Roxana Marachi, PhD

Cyber black market selling hacked ATO and MyGov logins shows Medibank and Optus only tip of iceberg // ABC News


By Sean Rubinsztein-Dunlop, Echo Hui, Sarah Curnow and Kevin Nguyen

 

"The highly sensitive information of millions of Australians — including logins for personal Australian Tax Office accounts, medical and personal data of thousands of NDIS recipients, and confidential details of an alleged assault of a Victorian school student by their teacher — is among terabytes of hacked data being openly traded online.

 

An ABC investigation has identified large swathes of previously unreported confidential material that is widely available on the internet, ranging from sensitive legal contracts to the login details of individual MyGov accounts, which are being sold for as little as $1 USD.

 

The huge volume of newly identified information confirms the high-profile hacks of Medibank and Optus represent just a fraction of the confidential Australian records recently stolen by cyber criminals.

 

At least 12 million Australians have had their data exposed by hackers in recent months.

It can also be revealed many of those impacted learnt they were victims of data theft only after being contacted by the ABC.

 

They said they were either not adequately notified by the organisations responsible for securing their data, or were misled as to the gravity of the breach.

 


 

One of the main hubs where stolen data is published is a forum easily discoverable through Google, which only appeared eight months ago and has soared in popularity — much to the alarm of global cyber intelligence experts.

 

Anonymous users on the forum and similar websites regularly hawk stolen databases collectively containing millions of Australians' personal information.

 

Others were seen offering generous incentives to those daring enough to go after specific targets, such as one post seeking classified intelligence on the development of Australian submarines.

 

"There's a criminal's cornucopia of information available on the clear web, which is the web that's indexed by Google, as well as in the dark web," said CyberCX director of cyber intelligence Katherine Mansted.

 

"There's a very low barrier of entry for criminals … and often what we see with foreign government espionage or cyber programs — they're not above buying tools or buying information from criminals either."

 

In one case, law student Zac's medical information, pilfered in one of Australia's most troubling cyber breaches, was freely published by someone without a clear motive.

 

Zac has a rare neuromuscular disorder which has left him unable to walk and prone to severe weakness and fatigue. The ABC has agreed not to use his full name because he fears the stolen information could be used to locate him.

 

His sensitive personal data was stolen in May in a cyber attack on CTARS, a company that provides a cloud-based client management system to National Disability Insurance Scheme (NDIS) and NSW out-of-home-care service providers.

 

The National Disability Insurance Agency (NDIA), which is responsible for the NDIS, told a Senate committee it had confirmed with CTARS that all 9,800 affected participants had been notified. 

 

But ABC Investigations has established this is not the case. Of the 20 victims of the breach the ABC spoke with, all but one — who later found a notice in her junk mail — said they had not received a notification or even heard of the hack.

 

The leaked CTARS database, verified by the ABC, included Medicare numbers, medical information, tax file numbers, prescription records, mental health diagnoses, welfare checks, and observations about high-risk behaviour such as eating disorders, self-harm and suicide attempts.

 

"It's really, really violating," said Zac, whose leaked data included severe allergy listings for common food and medicine,

"I may not like to think of myself as vulnerable … but I guess I am quite vulnerable, particularly living alone.

"Allergy records, things that are really sensitive, [are kept] private between me and my doctor and no one else but the people who support me.

 

"That's not the sort of information that you want getting into the wrong hands, particularly when ... you don't have a lot of people around you to advocate for you."

 

The CTARS database is just one of many thousands being traded on the ever-growing cybercrime black market. These postings appear on both the clear web — used every day through common web browsers — and on the dark web, which requires special software for access.

 

The scale of the problem is illustrated by the low prices being demanded for confidential data.

ABC Investigations found users selling personal information and login credentials for individual Australian accounts, including MyGov, the ATO and Virgin Money, for between $1 and $10 USD.

 

MyGov and ATO services are built with two-factor authentication, which protects accounts even when usernames and passwords are compromised, but those same login details could be used to bypass less-secure services.

 

One cyber intelligence expert showed the ABC a popular hackers forum, in which remote access to an Australian manufacturing company was auctioned for up to $500. He declined to identify the company.

CyberCX's Ms Mansted said the "black economy" in stolen data and hacking services was, by some measures, the third largest economy in the world, surpassed only by the GDPs of the US and China.

"The cost of buying a person's personal information or buying access to hack into a corporation, that's actually declining over time, because there is so much information and so much data out there," said Ms Mansted. 

 

Cyber threat investigator Paul Nevin monitors online forums where hundreds of Australians' login data are traded each week.

"The volume of them was staggering to me," said Mr Nevin, whose company Cybermerc runs surveillance on malicious actors and trains Australian defence officials.

 

"In the past, we'd see small scatterings of accounts but now, this whole marketplace has been commoditised and fully automated.

 

"The development of that capability has only been around for a few years but it shows you just how successful these actors are at what they do."

Explosive details leaked about private school


The cyber attack on Medibank last month by Russian criminal group REvil brought home the devastation cyber crime can inflict.

The largest health insurer in the country is now facing a possible class action lawsuit after REvil accessed the data of 9.7 million current and former customers, and published highly sensitive medical information online.

 

On the dark web, Russian and Eastern European criminal organisations host sites where they post ransom threats and later leak databases if the ransom is not paid.

 

The groups research their targets to inflict maximum damage. Victims range from global corporations, including defence firm Thales and consulting company Accenture, to Australian schools. 

 

In Melbourne, the Kilvington Grammar School community is reeling after more than 1,000 current and former students had their personal data leaked in October by a prolific ransomware gang, Lockbit 3.0. 

 

The independent school informed parents via emails, including one on November 2 that stated an "unknown third party has published a limited amount of data taken from our systems". 

Correspondence sent to parents indicated this "sensitive information" included contact details of parents, Medicare details and health information such as allergies, as well as some credit card information.

However, the cache of information actually published by Lockbit 3.0 was far more extensive than initially suggested.

ABC Investigations can reveal the ransomware group published highly confidential documents containing the bank account numbers of parents, legal and debt disputes between the school and families, report cards, and individual test results.

Most shocking was the publication of details concerning the investigation into a teacher accused of assaulting a child and privileged legal advice about the death of a student.

 

Kilvington Grammar has been at the centre of a coronial inquest into Lachlan Cook, 16, who died after suffering complications of Type 1 diabetes during a school trip to Vietnam in 2019.

 

Lachlan became critically ill and started vomiting, which was mistaken for gastroenteritis rather than a rare complication of his diabetes.

 

The coroner has indicated she will find the death was preventable because neither the school nor the tour operator, World Challenge, provided specific care for the teenager's diabetes. 

 

Lachlan's parents declined to comment, but ABC Investigations understands they did not receive notification from the school that sensitive legal documents about his death were stolen and published online.

 

Other parents whose details were compromised told the ABC they were frustrated by the school's failure to explain the scale of the breach.

 

"That's distressing that this type of data has been accessed," said father of two, Paul Papadopoulos.

 

"It's absolutely more sensitive [than parents were told] and I think any person would want to have known about it." 

 

In a statement to the ABC, Kilvington Grammar did not address specific questions about the Cook family tragedy, nor whether any ransom was demanded or paid.

 

The school's marketing director Camilla Fiorini acknowledged its attempt to notify families of the specifics of what personal data was stolen was an "imperfect process". 

 

"We have adopted a conservative approach and contacted all families that may have been impacted," she said.

 

"We listed — to the best of our abilities —  what data had been accessed ... we also suggested additional steps those individuals can consider taking to further protect their information.

 

"The school is deeply distressed by this incident and the impact it has had on our community." 

 

Other Australian organisations recently targeted by Lockbit 3.0 included a law firm, a wealth management firm for high-net-worth individuals, and a major hospitality company.

Blame game leaves victims out in the cold

The failure of Kilvington Grammar to properly notify the victims of the data theft is not an isolated case, and its targeting by a ransomware group is emblematic of a growing apparatus commoditising stolen personal information.

 

Australian Federal Police (AFP) Cybercrime Operations Commander Chris Goldsmid told the ABC personal data was becoming "increasingly valuable to cybercriminals who see it as information they can exploit for financial gain".

 

"Cybercriminals can now operate at all levels of technical ability and the tools they employ are easily accessible online," he warned.

 

He added that the number of cybercrime incidents had risen 13 per cent from the previous financial year, to 67,500 reports — likely a conservative figure.

 

"We suspect there are many more victims but they are too embarrassed to come forward, or they have not realised what has happened to them is a crime,"

 

Commander Goldsmid said.

While authorities and the Federal Government have warned Medibank customers to be on high-alert for identity thieves, many other Australians are unaware they are victims.

 

Under the Privacy Act, all government agencies, organisations that hold health information and companies with an annual turnover above $3 million are required to notify individuals when their data has been breached if it is deemed "likely to cause serious harm".

 

After CTARS was hacked in May, the company published a statement about the breach on its website but devolved its responsibility to inform NDIS recipients to the 67 individual service providers affected.

When ABC Investigations asked CTARS why many of the impacted NDIS recipients were not notified, it said it had decided the process was best handled by each provider.

"The OAIC [Office of the Australian Information Commissioner] suggests that notifications are usually best received from the organisation who has a relationship with impacted individuals — in this case, the service providers," a CTARS spokesperson said.

 

"CTARS worked extensively to support the service providers in being able to ... bring the notification to their clients' attention."

 

However, the NDIA told the ABC this responsibility lay not with those individual providers, but with CTARS.

 

"The Agency's engagement with CTARS following the breach, indicated that CTARS was fulfilling all its obligations under the Privacy Act in relation to the breach," an NDIA spokesperson said.

"The Agency has reinforced with CTARS its obligation to inform users of their services."

 

This has provided little comfort to Zac and other CTARS victims whose personal information may never be erased from the internet.

 

"It's infuriating, it's shocking and it's disturbing," said Zac.

 

"It makes me really angry to know that multiple government agencies and these private support companies, who I would have thought would be duty bound to hold my best interests at heart … especially when my safety is at risk … that they at no level attempted to get in contact with me and assist me in protecting my information."

 

Zac's former service provider, Southern Cross Support Services, did not respond to the ABC's questions.

 

A victim of another hack published on the same forum as the CTARS data is Karen Heath.

 

The Victorian woman has been the victim of two hacks in the past month, one of Optus' customer data and another of confidential information stored by MyDeal, which is owned by retail giant Woolworths Group. 

 

Woolworths told the ABC it has "enhanced" its security and privacy practices since the MyDeal hack and it "unreservedly apologise[d] for the considerable concern the MyDeal breach has caused". 

 

But Ms Heath remains anxious.

"You feel a bit helpless [and] you get worried about it," Ms Heath said.

 

"I don't even know that I'll shop at Woolworths again ... they own MyDeal. They have insurance companies, they have all sorts of things.

 

"So where does it end?"

 

For original post, please visit: 

https://amp.abc.net.au/article/101700974 

 

Policy Statement of the Federal Trade Commission on Education Technology // FTC

https://www.ftc.gov/system/files/ftc_gov/pdf/Policy%20Statement%20of%20the%20Federal%20Trade%20Commission%20on%20Education%20Technology.pdf 


Instagram fined €405M for violating kids’ privacy // Politico

The fine is the third for a Meta-owned company handed down by the Irish regulator.

 

https://www.politico.eu/article/instagram-fined-e405m-for-violating-kids-privacy/? 


Digital Game-Based Learning: Foundations, Applications, and Critical Issues // Earl Aguilera and Roberto de Roock, 2022 // Education 


By Earl Aguilera and Roberto de Roock

https://doi.org/10.1093/acrefore/9780190264093.013.1438

 

Summary
"As contemporary societies continue to integrate digital technologies into varying aspects of everyday life—including work, schooling, and play—the concept of digital game-based learning (DGBL) has become increasingly influential. The term DGBL is often used to characterize the relationship of computer-based games (including games played on dedicated gaming consoles and mobile devices) to various learning processes or outcomes. The concept of DGBL has its origins in interdisciplinary research across the computational and social sciences, as well as the humanities. As interest in computer games and learning within the field of education began to expand in the late 20th century, DGBL became somewhat of a contested term. Even foundational concepts such as the definition of games (as well as their relationship to simulations and similar artifacts), the affordances of digital modalities, and the question of what “counts” as learning continue to spark debate among positivist, interpretivist, and critical framings of DGBL. Other contested areas include the ways that DGBL should be assessed, the role of motivation in DGBL, and the specific frameworks that should inform the design of games for learning.

Scholarship representing a more positivist view of DGBL typically explores the potential of digital games as motivators and influencers of human behavior, leading to the development of concepts such as gamification and other uses of games for achieving specified outcomes, such as increasing academic measures of performance, or as a form of behavioral modification. Other researchers have taken a more interpretive view of DGBL, framing it as a way to understand learning, meaning-making, and play as social practices embedded within broader contexts, both local and historical. Still others approach DGBL through a more critical paradigm, interrogating issues of power, agency, and ideology within and across applications of DGBL. Within classrooms and formal settings, educators have adopted four broad approaches to applying DGBL: (a) integrating commercial games into classroom learning; (b) developing games expressly for the purpose of teaching educational content; (c) involving students in the creation of digital games as a vehicle for learning; and (d) integrating elements such as scoreboards, feedback loops, and reward systems derived from digital games into non-game contexts—also referred to as gamification.

Scholarship on DGBL focusing on informal settings has alternatively highlighted the socially situated, interpretive practices of gamers; the role of affinity spaces and participatory cultures; and the intersection of gaming practices with the lifeworlds of game players.

As DGBL has continued to demonstrate influence on a variety of fields, it has also attracted criticism. Among these critiques is the question of the relative effectiveness of DGBL for achieving educational outcomes. Critiques of the quality and design of educational games have also been raised by educators, designers, and gamers alike. Interpretive scholars have tended to question the primacy of institutionally defined approaches to DGBL, highlighting instead the importance of understanding how people make meaning through and with games beyond formal schooling. Critical scholars have also identified issues in the ethics of DGBL in general and gamification in particular as a form of behavior modification and social control. These critiques often intersect and overlap with criticism of video games in general, including issues of commercialism, antisocial behaviors, misogyny, addiction, and the promotion of violence. Despite these criticisms, research and applications of DGBL continue to expand within and beyond the field of education, and evolving technologies, social practices, and cultural developments continue to open new avenues of exploration in the area."

 

To access original article, please visit:
https://doi.org/10.1093/acrefore/9780190264093.013.1438


Student privacy laws remain the same, but children are now the product // Joel Schwarz and Emily Cherkin, The Hill 


By Joel Schwarz and Emily Cherkin

The Federal Trade Commission (FTC) recently issued a policy statement about the application of the Children’s Online Privacy Protection Act (COPPA) to Ed Tech providers, warning that they can only use student personally identifiable information (PII) collected with school consent for the benefit of the school, and that they cannot retain it for longer than required to meet the purpose of collection.

Ironically, days later, a Human Rights Watch investigative report observed that almost 90 percent of Ed Tech products it reviewed “appeared to engage in data practices that put children’s rights at risk.”

These revelations are no surprise to children’s privacy advocacy groups like the Student Data Privacy Project. But in the midst of a COVID-fog, much like the fog of war, Ed Tech remained largely insulated from scrutiny, siphoning student PII with impunity.

 

Taking a step back, it’s important to understand how Ed Tech providers access and collect this information. In 1974, the Family Educational Rights and Privacy Act (FERPA) was passed to protect school-held PII, such as that found in student directories. But FERPA contains a “School Official Exception” that allows schools to disclose children’s PII without parental consent so long as it’s disclosed for a “legitimate educational interest” and the school maintains “direct control” over the provider.  

In 1974, it was easy to maintain direct control over entities because there was no internet.

Today, schools increasingly rely on Ed Tech platforms to provide digital learning, pursuant to an electronically signed agreement, hosted by a nameless/faceless server, somewhere in the ether. Yet the law has barely changed since 1974. For example, the Department of Education (DOE) maintains that direct control can be established through use of a contract between the parties, despite the fact that online contracts and Terms of Service are often take-it-or-leave-it propositions that favor online services. In law, we call these "contracts of adhesion." In Ed Tech advocacy, we call them data free-for-alls.

 

Given these concerns, in 2021 the Student Data Privacy Project (SDPP) helped parents from North Carolina to Alaska file access requests with their children’s schools under a FERPA provision mandating that schools provide parents access to their children’s PII. Most parents received nothing. Many schools seemed unable to get their Ed Tech providers to respond, and other schools didn’t know how to make the request of the provider.

One Minnesota parent received over 2,000 files, revealing a disturbing amount of personal information held by Ed Tech. How might this data be used to profile this child? And how does this comport with the FTC's warning about retaining information only for as long as needed to fulfill the purpose of collection?

Despite this isolated example, most parents failed to receive a comprehensive response. As such, SDPP worked with parents to file complaints with the DOE in July 2021. As the one-year anniversary of these complaints draws near, however, the DOE has taken no substantive action. 

 

Ironically, in cases where the DOE sent copies of the parent’s complaint to the affected school district, the school’s response only bolstered concerns. One Alaska school district misapplied a Supreme Court case dealing with FERPA, asserting that “data gathered by technology vendors is not ‘educational records’ under FERPA” because the Ed Tech records are not “centrally stored” by the school. Ironically, that school attached its FERPA addendum to that same letter, which explicitly states that it “includes all data specifically protected by FERPA, including student education records, in any form.”

Unfortunately, this is indicative of widespread confusion by schools about applying FERPA to Ed Tech.

Yet parents have few options for holding Ed Tech providers accountable. Parents can’t sue Ed Tech because the schools have the direct contractual relationship. Parents can’t directly enforce FERPA because FERPA doesn’t offer a private right of action. Even state privacy laws are of little help when consent for sharing is given — and FERPA allows schools to consent on parents’ behalf.

 

There is some cause for hope. For example, President Biden’s March 1 State of the Union speech challenged Congress to strengthen children’s privacy protections “by banning online platforms from excessive data collection and targeted advertising for children.” And in January, Rep. Tom Emmer (R-Minn.) sent DOE a letter inquiring about the SDPP parent complaints. Most recently, we have the FTC’s warning to Ed Tech about protecting student data privacy. Beyond that, however, we’ve seen little progress, or action, by the government.

So here are three things that need to happen to hold Ed Tech accountable:

  1. The FTC needs to enforce COPPA obligations on Ed Tech providers.

  2. The DOE must enforce FERPA, compelling schools to hold Ed Tech vendors accountable.

  3. Congress must update FERPA for the realities of the 21st century.

A 50th Anniversary is always a big occasion in a relationship, warranting a grand gesture to renew the commitment.

 

So what better gesture for the 50th anniversary of FERPA in 2024 than for the government to renew its commitment to protecting the privacy of nearly 50 million students by enforcing the law and closing the gaps that have allowed Ed Tech providers to exploit children's PII for their own profit, without oversight or accountability?

 

To view original post, please visit: 

https://thehill.com/opinion/cybersecurity/3586011-student-privacy-laws-remain-the-same-but-children-are-now-the-product/ 


Ransomware Attacks Against Higher Ed Increase // Inside Higher Ed


"Colleges and universities experienced a surge in ransomware attacks in 2021, and those attacks had significant operational and financial costs, according to a new report."

 

By Susan D'Agostino

“You can collect that money in a couple of hours,” a ransomware hacker’s representative wrote in a secure June 2020 chat with a University of California, San Francisco, negotiator about the $3 million ransom demanded. “You need to take us seriously. If we’ll release on our blog student records/data, I’m 100% sure you will lose more than our price what we ask.”

The university later paid $1.14 million to gain access to the decryption key.

Colleges and universities worldwide experienced a surge in ransomware attacks in 2021, and those attacks had significant operational and financial costs, according to a new report from Sophos, a global cybersecurity leader. The survey included 5,600 IT professionals, including 410 from higher education, across 31 countries. Though most of the education victims succeeded in retrieving some of their data, few retrieved all of it, even after paying the ransom.

 

“The nature of the academic community is very collegial and collaborative,” said Richard Forno, assistant director of the University of Maryland Baltimore County Center for Cybersecurity. “There’s a very fine line that universities and colleges have to walk between facilitating academic research and education and maintaining strong security.”

That propensity of colleges to share openly and widely can make the institutions susceptible to attacks.

Nearly three-quarters (74 percent) of ransomware attacks on higher ed institutions succeeded. Hackers' efforts in other sectors were not as fruitful, including in business, health care and financial services, where respectively 68 percent, 61 percent and 57 percent of attacks succeeded. Given this above-average success rate in encrypting institutions' data, cybercriminals may view colleges and universities as soft targets for ransomware attacks.

Despite high-profile ransomware attacks such as one in 2020 that targeted UC San Francisco, higher ed institutions’ efforts to protect their networks continued to fall short in 2021."...

 

For original post, please visit:

https://www.insidehighered.com/news/2022/07/22/ransomware-attacks-against-higher-ed-increase 


'Hey Siri': Virtual assistants are listening to children and then using the data // The Conversation 


Published: July 14, 2022 9.43am EDT 

By Stephen J. Neville and Natalie Coulter, York University, Canada
"In many busy households around the world, it’s not uncommon for children to shout out directives to Apple’s Siri or Amazon’s Alexa. They may make a game out of asking the voice-activated personal assistant (VAPA) what time it is, or requesting a popular song. While this may seem like a mundane part of domestic life, there is much more going on.

 

The VAPAs are continuously listening, recording and processing acoustic happenings in a process that has been dubbed “eavesmining,” a portmanteau of eavesdropping and datamining. This raises significant concerns pertaining to issues of privacy and surveillance, as well as discrimination, as the sonic traces of peoples’ lives become datafied and scrutinized by algorithms.

These concerns intensify as we apply them to children. Their data is accumulated over lifetimes in ways that go well beyond what was ever collected on their parents with far-reaching consequences that we haven’t even begun to understand.

Always listening

The adoption of VAPAs is proceeding at a staggering pace as the technology extends to mobile phones, smart speakers and an ever-increasing number of products that are connected to the internet. These include children's digital toys, home security systems that listen for break-ins and smart doorbells that can pick up sidewalk conversations.

There are pressing issues that derive from the collection, storage and analysis of sonic data as they pertain to parents, youth and children. Alarms have been raised in the past — in 2014, privacy advocates raised concerns about how much the Amazon Echo was listening to, what data was being collected and how the data would be used by Amazon's recommendation engines.

And yet, despite these concerns, VAPAs and other eavesmining systems have spread exponentially. Recent market research predicts that by 2024, the number of voice-activated devices will explode to over 8.4 billion.

 

Recording more than just speech

There is more being gathered than just uttered statements, as VAPAs and other eavesmining systems overhear personal features of voices that involuntarily reveal biometric and behavioural attributes such as age, gender, health, intoxication and personality.

Information about acoustic environments (like a noisy apartment) or particular sonic events (like breaking glass) can also be gleaned through “auditory scene analysis” to make judgments about what is happening in that environment.

Eavesmining systems already have a recent track record for collaborating with law enforcement agencies and being subpoenaed for data in criminal investigations. This raises concerns of other forms of surveillance creep and profiling of children and families.

For example, smart speaker data may be used to create profiles such as “noisy households,” “disciplinary parenting styles” or “troubled youth.” This could, in the future, be used by governments to profile those reliant on social assistance or families in crisis with potentially dire consequences.

There are also new eavesmining systems presented as a solution to keep children safe called “aggression detectors.” These technologies consist of microphone systems loaded with machine learning software, dubiously claiming that they can help anticipate incidents of violence by listening for signs of rising volume and emotions in voices, and for other sounds such as glass breaking.

Monitoring schools

Aggression detectors are advertised in school safety magazines and at law enforcement conventions. They have been deployed in public spaces, hospitals and high schools under the guise of being able to pre-empt and detect mass shootings and other cases of lethal violence.

But there are serious issues around the efficacy and reliability of these systems. One brand of detector repeatedly misinterpreted vocal cues of kids including coughing, screaming and cheering as indicators of aggression. This begs the question of who is being protected and who will be made less safe by its design.

 

Some children and youth will be disproportionately harmed by this form of securitized listening, and the interests of all families will not be uniformly protected or served. A recurrent critique of voice-activated technology is that it reproduces cultural and racial biases by enforcing vocal norms and misrecognizing culturally diverse forms of speech in relation to language, accent, dialect and slang.


We can anticipate that the speech and voices of racialized children and youth will be disproportionately misinterpreted as aggressive sounding. This troubling prediction should come as no surprise as it follows the deeply entrenched colonial and white supremacist histories that consistently police a “sonic color line.”

Sound policy

Eavesmining is a rich site of information and surveillance as children and families’ sonic activities have become valuable sources of data to be collected, monitored, stored, analysed and sold without the subject’s knowledge to thousands of third parties. These companies are profit-driven, with few ethical obligations to children and their data.

With no legal requirement to erase this data, the data accumulates over children’s lifetimes, potentially lasting forever. It is unknown how long and how far-reaching these digital traces will follow children as they age, how widespread this data will be shared or how much this data will be cross-referenced with other data. These questions have serious implications on children’s lives both presently and as they age.

There are myriad threats posed by eavesmining in terms of privacy, surveillance and discrimination. Individualized recommendations, such as informational privacy education and digital literacy training, will be ineffective in addressing these problems and place too great a responsibility on families to develop the necessary literacies to counter eavesmining in public and private spaces.

We need to consider the advancement of a collective framework that combats the unique risks and realities of eavesmining. Perhaps the development of Fair Listening Practice Principles — an auditory spin on the “Fair Information Practice Principles” — would help evaluate the platforms and processes that impact the sonic lives of children and families."... 

 

For full post, please visit:

 https://theconversation.com/amp/hey-siri-virtual-assistants-are-listening-to-children-and-then-using-the-data-186874 

Scooped by Roxana Marachi, PhD

‘Digital child labour’ – Pediatrician slams use of children on social media // NewsTalk 


"We need much stricter controls on the brands and influencers that share photos of children on social media, according to a pediatric consultant."

By Michael Staines
"In a recent column for The Irish Examiner, Dr Niamh Lynch said we need to re-think how we use images of children on social media – calling for an end to what she called ‘digital child labour.’


She said children’s rights to privacy and safety were being breached without their consent, and often for financial gain.

On The Pat Kenny Show this morning, she said the article was in response to the rise in ‘sharenting’ and ‘mumfluencers.’ 

 

“Without picking one example - and that wouldn’t actually be fair because I think a bit of responsibility has to be taken by the social media companies themselves and by the companies that use these parents - but certainly there would be tales of children being clearly unhappy or tired or not in the mood and yet it has become their job to promote a product or endorse a product or whatever,” she said.

“These children are doing work and because they’re young, they can’t actually consent to that. Their privacy can sometimes be violated and there is a whole ethical minefield around it.”

'Digital child labour'

She said Ireland needs tighter legislation to protect children’s rights and privacy – and to ensure there is total transparency about the money changing hands.

“People don’t realise that these children are working,” she said.

“These children are doing a job.

“It is a job that can at times compromise their safety. It is a job that compromises their privacy and it is certainly a job they are doing without any sort of consent.

“It is very different say with a child in an ad for a shopping centre or something like that. Where you see the face of the child, but you know nothing about them.

“These children, you know everything about them really in many cases.

“So yes, I would say there needs to be tighter legislation around it. It needs to be clear because very often it is presented within the sort of cushion of family life and the segue between what is family life and what is an ad isn’t always very clear.

“There needs to be more transparency really about transactions that go on in the background.”

Privacy

She said there is a major issue around child safety when so much personal information is being shared.

“The primary concern would be the safety of the child because once a child becomes recognisable separate to the parent then there’s the potential for them to become a bit of a target,” she said.

“When you think about how much is shared about these children online, it is pretty easy to know who their siblings are, what their date of birth is, when they lost their last tooth, what their pet’s name is.

“There is so much information out there about certain children and there are huge safety concerns around that then as well.”

Legislation

Dr Lynch said we won’t know the impact on many of these children for at least another decade; however, children who featured in early YouTube videos are already coming out and talking about what an “uncomfortable experience” it was for them.

“I think the parents themselves to a degree perhaps are also being exploited by large companies who are using them to use their child to promote products,” she said.

“So, I think large companies certainly need to take responsibility and perhaps we should call those companies out when we see that online.”

“The social media companies really should tighten up as well.”

 

For audio interview and full post, please visit: 

 
Scooped by Roxana Marachi, PhD

Illuminate Education Breach Included Los Angeles Unified & Riverside County Districts, Pushing Total Impacted to Over 3M // THE Journal


California's Largest District & Riverside County Add Nearly 1 Million To the Number of Students Whose Private Data Was Stolen From Illuminate

By Kristal Kuykendall 

"The breach of student data that occurred during a January 2022 cyberattack targeting Illuminate Education’s systems is now known to have impacted the nation’s second-largest school district, Los Angeles Unified with 430,000 students, which has notified state officials along with 24 other districts in California and one in Washington state.

The data breach notifications posted on the California Attorney General’s website in the past week by LAUSD, Ceres Unified School District with 14,000 students, and Riverside County Office of Education representing 23 districts and 431,000 students, mean that Illuminate Education’s data breach leaked the private information of well over 3 million students — and potentially several times that total.

The vast reach of the data breach will likely never be fully known because most state laws do not require public disclosure of data breaches; Illuminate has said in a statement that the data of current and former students was compromised at the impacted schools but declined to specify the total number of students impacted in multiple email communications with THE Journal.

 

The estimated total of 3 million is based on New York State Department of Education official estimates that “at least 2 million” statewide were impacted, plus the current enrollment figures of the other districts that have since disclosed their student data was also breached by Illuminate.

California requires a notice of a data breach to be posted on the attorney general’s website, but the notices do not include any details such as what data was stolen, nor the number of students affected; the same is true in Washington, where Impact Public Schools in South Puget Sound notified the state attorney general this week that its students were among those impacted by the Illuminate incident.

Oklahoma City Public Schools on May 13 added its 34,000 students to the ever-growing list of those impacted by the Illuminate Education data breach; thus far, it is the only district in Oklahoma known to have been among the hundreds of K–12 schools and districts across the country whose private student data was compromised while stored within Illuminate’s systems. Oklahoma has no statewide public disclosure requirements, so it’s left up to local districts to decide whether and how to notify parents in the event of a breach of student data, Oklahoma Department of Education officials told THE Journal recently.

In Colorado, where nine districts have publicly disclosed that the Illuminate breach included the data of their combined 140,000 students, there is no legal mandate for school districts nor ed tech vendors to notify state education officials when student data is breached, Colorado Department of Education Director of Communications Jeremy Meyer told THE Journal. State law does not require student data to be encrypted, he said, and CDE has no authority to collect data on nor investigate data breaches. Colorado’s Student Data Transparency and Security Act, passed in 2016, goes no further than “strongly urging” local districts to stop using ed tech vendors who leak or otherwise compromise student data.

Most of the notifications shared by districts included in the breach have simply shared a template letter, or portions of it, signed by Illuminate Education. It states that Social Security numbers were not part of the private information that was stolen during the cyberattack.

Notification letters shared by impacted districts have stated that the compromised data included student names, academic and behavioral records, enrollment data, disability accommodation information, special education status, demographic data, and in some cases the students’ reduced-price or free lunch status.

Illuminate has told THE Journal that the breach was discovered after it began investigating suspicious access to its systems in early January. The incident resulted in a week-long outage of all Illuminate’s K–12 school solutions, including IO Classroom (previously named Skedula), PupilPath, EduClimber, IO Education, SchoolCity, and others, according to its service status site. The company’s website states that its software products serve over 5,000 schools nationally with a total enrollment of about 17 million U.S. students.

Hard-Hit New York Responds with Investigation of Illuminate

The New York State Education Department on May 5 told THE Journal that 567 schools in the state — including “at least” 1 million current and former students — were among those impacted by the Illuminate data breach, and NYSED data privacy officials opened an investigation on April 1.

The list of all New York schools impacted by the data breach was sent to THE Journal in response to a Freedom of Information request; NYSED officials said the list came from Illuminate. Each impacted district was working to confirm how many current and former students were among those whose data were compromised, and each is required by law to report those totals to NYSED, so the total number of students affected was expected to grow, the department said."

 

For original publication, please visit: 

https://thejournal.com/articles/2022/05/27/illuminate-breach-included-los-angeles-riverside-county-pushing-total-impacted-well-over-2-million.aspx?m=1  

Scooped by Roxana Marachi, PhD

College Students Say Crying In Exams Activates "Cheating" Eye Tracker Software // Futurism


By Lonnie Lee Hood

"Colleges and universities are increasingly using digital tools to prevent cheating during online exams, since so many students are taking class from home or their dorm rooms in the era of COVID-19.

The programs — prominent software options include Pearson VUE and Honorlock — analyze imagery from students' webcams to detect behavior that might be linked to cheating.

Needless to say, there are pain points.

University of Kentucky professor Josef Fruehwald, for instance, said in a popular video on TikTok that he wouldn't trust educators who use the software, prompting 2.3 million views and dozens of comments from stressed out students.

 

"One of my French exams got flagged for cheating because I was crying for the whole thing and my French prof had to watch 45 min of me quietly sobbing," one user replied.

"Since COVID, LSAT uses a proctoring system," another said. "I was yelled at for having a framed quote from my grandmother on the wall."

No less harrowing, one student said a proctor asked them to change into "something more conservative" during the exam, in the student's own home.

Fruehwald got so many responses he made a Twitter thread about it — whereupon tweeps started sharing even more allegations.

 

"My husband has two classes left for his BFA and one of them is a math class that requires an assessment test before enrolling," wrote one person. "He should have graduated two years ago but he couldn't take the friggin math class because THE SOUND OF HIS LAPTOP'S FAN SET OFF THE PROCTOR SOFTWARE."

Representatives of the anti-cheating software market did push back.

"Honorlock uses facial detection and ensures certain facial landmarks are present in the webcam during the assessment," said Honorlock's chief marketing officer Tess Mitchell, after this story was initially published. "Honorlock records the student’s webcam, so crying is visible, however, crying does not trigger a flag or proctor intervention."

Eye tracking software isn't exactly knocking it out of the park in public opinion lately. One startup is forcing people to watch ads with their eyelids all the way open, and another is offering crypto in exchange for eyeball time.

 

The pandemic has changed a lot about the way society runs, and education seems to be a particularly challenged sector. As teachers quit jobs and students say they're silently sobbing into eye tracking programs on a computer screen, it's not hard to see why.

Updated with additional context and a statement from Honorlock.

 

For original post, please visit:

https://futurism.com/college-students-exam-software-cheating-eye-tracking-covid?taid=62713f29ee8b820001167731 

 

Scooped by Roxana Marachi, PhD

EdTech Tools Coming Under FTC Scrutiny Over Children’s Privacy // BloombergLaw


The Federal Trade Commission is planning to scrutinize educational technology in its enforcement of children’s online privacy rules.

 

By Andrea Vittorio
"The Federal Trade Commission is planning to scrutinize educational technology in its enforcement of children’s online privacy rules.

 

The commission is slated to vote at a May 19 meeting on a policy statement related to how the Children’s Online Privacy Protection Act applies to edtech tools, according to an agenda issued Thursday.

 

The law, known as COPPA, gives parents control over what information online platforms can collect about their kids. Parents concerned about data that digital learning tools collect from children have called for stronger oversight of technology increasingly used in schools.

The FTC’s policy statement “makes clear that parents and schools must not be required to sign up for surveillance as a condition of access to tools needed to learn,” the meeting notice said.

It’s the first agency meeting since Georgetown University law professor Alvaro Bedoya was confirmed as a member of the five-seat commission, giving Chair Lina Khan a Democratic majority needed to pursue policy goals. Bedoya has said he wants to strengthen protections for children’s digital data.

Companies that violate COPPA can face fines from the FTC. Past enforcement actions under the law have been brought against companies including TikTok and Google’s YouTube.

Alphabet Inc.‘s Google has come under legal scrutiny for collecting data on users of its educational tools and relying on schools to give consent for data collection on parents’ behalf.

New Mexico’s attorney general recently settled a lawsuit against Google that alleged COPPA violations. Since the suit was filed in 2020, Google has launched new features to protect children’s data.


To contact the reporter on this story: Andrea Vittorio in Washington at avittorio@bloomberglaw.com. To contact the editors responsible for this story: Jay-Anne B. Casuga at jcasuga@bloomberglaw.com; Tonia Moore at tmoore@bloombergindustry.com 

 

 

Scooped by Roxana Marachi, PhD

Face up to it – this surveillance of kids in school is creepy // Stephanie Hare // The Guardian


"Facial recognition technology doesn’t just allow children to make cashless payments – it can gauge their mood and behaviour in class"

 

 

By Stephanie Hare

"A few days ago, a friend sent me a screenshot of an online survey sent by his children’s school and a company called ParentPay, which provides technology for cashless payments in schools. “To help speed up school meal service, some areas of the UK are trialling using biometric technology such as facial identity scanners to process payments. Is this something you’d be happy to see used in your child’s school?” One of three responses was allowed: yes, no and “I would like more information before agreeing”.

My friend selected “no”, but I wondered what would have happened if he had asked for more information before agreeing. Who would provide it? The company that stands to profit from his children’s faces? Fortunately, Defend Digital Me’s report, The State of Biometrics 2022: A Review of Policy and Practice in UK Education, was published last week, introduced by Fraser Sampson, the UK’s biometrics and surveillance camera commissioner. It is essential reading for anyone who cares about children.

 

First, it reminds us that the Protection of Freedoms Act 2012, which protects children’s biometrics (such as face and fingerprints), applies only in England and Wales. Second, it reveals that the information commissioner’s office has still not ruled on the use of facial recognition technology in nine schools in Ayrshire, which was reported in the media in October 2021, much less the legality of the other 70 schools known to be using the technology across the country. Third, it notes that the suppliers of the technology are private companies based in the UK, the US, Canada and Israel.


 

The report also highlights some gaping holes in our knowledge about the use of facial recognition technology in British schools. For instance, who in government approved these contracts? How much has this cost the taxpayer? Why is the government using a technology that is banned in several US states and which regulators in France, Sweden, Poland and Bulgaria have ruled unlawful on the grounds that it is neither necessary nor proportionate and does not respect children’s privacy? Why are British children’s rights not held to the same standard as their continental counterparts?

 

The report also warns that this technology does not just identify children or allow them to transact with their bodies. It can be used to assess their classroom engagement, mood, attentiveness and behaviour. One of the suppliers, CRB Cunninghams, advertises that it scans children’s faces every three months and that its algorithm “constantly evolves to match the child’s growth and change of appearance”.

So far, MPs have been strikingly silent on the use of such technology in schools. Instead, two members of the House of Lords have sounded the alarm. In 2019, Lord Clement-Jones put forward a private member’s bill for a moratorium and review of all uses of facial recognition technology in the UK. The government has yet to give this any serious consideration. Undaunted, his colleague Lord Scriven said last week that he would put forward a private member’s bill to ban its use in British schools.

It’s difficult not to wish the two lords well when you return to CRB Cunninghams’ boasts about its technology. “The algorithm grows with the child,” it proclaims. That’s great, then: what could go wrong?

Stephanie Hare is a researcher and broadcaster. Her new book is Technology Is Not Neutral: A Short Guide to Technology Ethics

 


 

For original post, please visit: 

https://www.theguardian.com/commentisfree/2022/may/08/face-up-to-it-this-surveillance-of-kids-in-schools-is-creepy 

Scooped by Roxana Marachi, PhD

The Datafication of Student Life and the Consequences for Student Data Privacy // Kyle M.L. Jones 

By Kyle M. L. Jones (MLIS, PhD)
Indiana University–Purdue University Indianapolis (IUPUI)
 

"The COVID-19 pandemic changed American higher education in more ways than many people realize: beyond forcing schools to transition overnight to fully online learning, the health crisis has indirectly fueled institutions’ desire to datafy students in order to track, measure, and intervene in their lives. Higher education institutions now collect enormous amounts of student data, by tracking students’ performance and behaviors through learning management systems, learning analytic systems, keystroke clicks, radio frequency identification, and card swipes throughout campus locations. How do institutions use all this data, and what are the implications for student data privacy? Are the technologies as effective as institutions claim? This blog explores these questions and calls for higher education institutions to better protect students, their individuality, and their power to make the best choices for their education and lives.

When the pandemic prevented faculty and students from accessing their common campus haunts, including offices and classrooms, they relied on technologies to fill their information, communication, and education needs. Higher education was arguably better prepared than other organizations and institutions for immersive online education. For decades, universities and colleges have invested significant resources in networking infrastructures and applications to support constant communication and information sharing. Educational technologies, such as learning management systems (LMSs) licensed by Instructure (Canvas) and Blackboard, and productivity tools such as Microsoft’s Office365 are ubiquitous in higher education. So, while the transition to online education was difficult for some in pedagogical terms, the technological ability to do so was not: higher education was prepared.

 

Datafication Explained: How Institutions Quantify Students

The same technological ubiquity that has helped higher education succeed during the pandemic has also fueled institutions’ growing desire to datafy students for the purposes of observing, measuring, and intervening in their lives. These practices are not new to universities and colleges, who have long held that creating education records about students supports administrative record keeping and instruction. But data and informational conditions today are much different than just 10 to 20 years ago: the ability to track, capture, and analyze a student’s online information behaviors, communications, and system actions (e.g., clicks, keystrokes, facial movements), not to mention their granular academic history, is possible.

In non-pandemic times, when students are immersed in campus life, myriad sensors (e.g., WiFi, RFID) and systems (e.g., building and transactional card swipes) associated with a specific location also make it possible to analyze a student’s physical movements. These data points enable institutions to track where a student has been and with whom that student has associated, by examining similar patterns in the data.

How are institutions and the educational technology (edtech) companies they rely on using their growing stores of data? There have been infamous cases over the years, such as the Mount St. Mary’s “drown the bunnies” fiasco, when the previous president attempted to use predictive measures to identify and force out students unlikely to achieve academic success and be retained. Then-president Simon Newman, who was eventually fired, argued, “This is hard for [the faculty] because you think of the students as cuddly bunnies, but you can’t…. You just have to drown the bunnies…. put a Glock to their heads.” At the University of Arizona, its “Smart Campus research” aims to “repurpose the data already being captured from student ID cards to identify those most at risk for not returning after their first year of college.” It used student ID card data to track and measure social interactions through time-stamp and geolocation metadata. The analysis enabled the university to map student interactions and their social networks, all for the purpose of predicting a student’s likelihood of being retained. 

Edtech has also invested heavily in descriptive and predictive analytic capabilities, sometimes referred to as learning analytics. Common LMSs often record and share descriptive statistics with instructors concerning which pages and resources (e.g., PDFs, quizzes, etc.) a student has clicked on; some instructors use the data to create visualizations to make students aware of their engagement levels in comparison to their peers in a course. Other companies use their access to real-time system data, and to the students who create that data, to run experiments. Pearson gained attention for its use of social-psychological interventions on over 9,000 students at 165 institutions to test “whether students who received the messages attempted and completed more problems than their counterparts at other institutions.” While some characterize Pearson’s efforts as simple A/B testing, often used to examine interface tweaks on websites and applications, Pearson did the interventions based on its own ethical review, without input from any of the 165 institutions and without students’ consent.

 

Is Datafication Worth It? Privacy Considerations

The higher education data ecosystem and the paths it opens for universities, edtech, and other third-party actors to use it raises significant questions about the effects on students’ privacy. The datafication of student life may lead institutions to improve student learning as well as retention and graduation rates. Maybe studying student life at a granular, identifiable level, or even at broader subgroup levels, improves institutional decision making and improves an institution’s financial situation. But what are the costs of these gains? The examples above, many of which I have more comprehensively summarized and analyzed elsewhere, point to clear issues. 

Chief among them is privacy. It is not normative for institutions—or the companies they contract for services—to expose a student’s life, regardless of the purposes or justifications. Yet, universities and colleges continue to push the point that they can do so and are often justified in doing so if it improves student success. But student success is a broad term. Whose success matters and warrants the intrusion? Often an analytic, especially a predictive measure, requires historical data, meaning that one student’s life is made analyzable only for another student downstream to benefit months or years later. And how do institutions define success? Student success may be learning gains, but education institutions often construe it as retention and graduation, which are just proxies. 

When institutions datafy student life for some purpose other than to directly help students, they treat students as objects—not human beings with unique interests, goals, and autonomy over their lives. Institutions and others can use data and related artifacts to guide, nudge, and even manipulate student choices with an invisible hand, since students are rarely aware of the full reach of an institution’s data infrastructure. Students trust that institutions will protect identifiable data and information, but that trust is misplaced if institutions 1) are not transparent about their data practices and 2) do not enable students to make their own privacy choices to the greatest extent possible. Student privacy policy is often difficult for students to understand and locate. 

 

Moreover, institutions need to justify their analytic practices. They should explain the intent of each practice and the empirical support behind that justification. If a practice is experimental, institutions must communicate that they have no clear evidence it will benefit students. If research supports the practice, institutions should summarize that research and make it available for students to review. 

Many other policy and practice recommendations are relevant, as the literature outlines ethics codes, philosophical arguments, and useful principles for practice. The key point here is that the datafication of student life and the privacy problems it creates are justified only if higher education institutions protect students and put their interests first, treat students as humans, and respect their choices about their lives."

 

To view original post, please visit:

https://studentprivacycompass.org/the-datafication-of-student-life-and-the-consequences-for-student-data-privacy/ 


Scooped by Roxana Marachi, PhD
Scoop.it!

FTC Accuses Chegg Homework Help App of ‘Careless’ Data Security // The New York Times

FTC Accuses Chegg Homework Help App of ‘Careless’ Data Security // The New York Times | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it

By Natasha Singer

"The Federal Trade Commission on Monday cracked down on Chegg, an education technology firm based in Santa Clara, Calif., saying the company’s “careless” approach to cybersecurity had exposed the personal details of tens of millions of users.

 

In a legal complaint, filed on Monday morning, regulators accused Chegg of numerous data security lapses dating to 2017. Among other problems, the agency said, Chegg had issued root login credentials, essentially an all-access pass to certain databases, to multiple employees and outside contractors. Those credentials enabled many people to look at user account data, which the company kept on Amazon Web Services’ online storage system.

 

As a result, the agency said, a former Chegg contractor was able to use company-issued credentials to steal the names, email addresses and passwords of about 40 million users in 2018. In certain cases, sensitive details on students’ religion, sexual orientation, disabilities and parents’ income were also taken. Some of the data was later found for sale online.

 

Chegg’s popular homework help app is used regularly by millions of high school and college students. To settle the F.T.C.’s charges, the agency said Chegg had agreed to adopt a comprehensive data security program.

 

In a statement, Chegg said data privacy was a top priority for the firm and that the company had worked with the F.T.C. to reach a settlement agreement. The company said it currently has robust security practices, and that the incidents described in the agency’s complaint had occurred more than two years ago. Only a small percentage of users had provided data on their religion and sexual orientation as part of a college scholarship finder feature, the company said in the statement.

“Chegg is wholly committed to safeguarding users’ data and has worked with reputable privacy organizations to improve our security measures and will continue our efforts,” the statement said.

 

The F.T.C.’s enforcement action against Chegg, a prominent industry player, amounts to a warning to the U.S. education technology industry.

 

Since the early days of the pandemic in 2020, the education technology sector has enjoyed a surge in customers and revenue. To enable remote learning, many schools and universities rushed to adopt digital tools like exam-proctoring software, course management platforms and video meeting systems.

 
Students and their families, too, turned in droves to online tutoring services and study aids like math apps. Among them, Chegg, which had a market capitalization of $2.7 billion at the end of trading on Monday, reported annual revenues of $776 million for 2021, an increase of 20 percent from the previous year.
 

Some online learning systems proved so useful that many students, and their educational institutions, continued to use the tools even after schools and colleges returned to in-person teaching.

But the fast growth of digital learning tools during the pandemic also exposed widespread flaws.

 

Many online education services record, store and analyze a trove of data on students’ every keystroke, swipe and click — information that can include sensitive details on children’s learning challenges or precise locations. Privacy and security experts have warned that such escalating surveillance may benefit companies more than students.

In March, Illuminate Education, a leading provider of student-tracking software, reported a cyberattack on certain company databases. The incident exposed the personal information of more than a million current and former students across dozens of districts in the United States — including New York City, the nation’s largest public school system.


In May, the F.T.C. issued a policy statement saying that it planned to crack down on ed tech companies that collected excessive personal details from schoolchildren or failed to secure students’ personal information.


The F.T.C. has a long history of fining companies for violating children’s privacy on services like YouTube and TikTok. The agency is able to do so under a federal law, the Children’s Online Privacy Protection Act, which requires online services aimed at children under 13 to safeguard youngsters’ personal data and obtain parental permission before collecting it.


But the federal complaint against Chegg represents the first case under the agency’s new campaign focused specifically on policing the ed-tech industry and protecting student privacy. In the Chegg case, the homework help platform is not aimed at children, and the F.T.C. did not invoke the children’s privacy law. The agency accused the company of unfair and deceptive business practices.

Chegg was founded in 2005 as a textbook rental service for college students. Today it is an online learning giant that rents e-textbooks.

 

But it is most known as a homework help platform where, for $15.95 per month, students can find ready answers to millions of questions on course topics like relativity or mitosis. Students may also ask Chegg’s online experts to answer specific study or test questions they have been assigned.

Teachers have complained that the service has enabled widespread cheating. Students even have a nickname for copying answers from the platform: “chegging.”

Chegg’s privacy policy promised users that the company would take “commercially reasonable security measures to protect” their personal information. Chegg’s scholarship finder service, for instance, collected information like students’ birth dates as well as details on their religion, sexual orientation and disabilities, the F.T.C. said.

 

But regulators said the company failed to use reasonable security measures to protect user data, even after a series of security lapses that enabled intruders to gain access to sensitive student data and employees’ financial information.

As part of the consent agreement proposed by the F.T.C., Chegg must provide security training to employees and encrypt user data. Chegg must also give consumers access to the personal information it has collected about them — including any precise location data or persistent identifiers like IP addresses — and enable users to delete their records.

Other online learning services may also hear from regulators. The F.T.C. disclosed in July that it was pursuing a number of nonpublic investigations into ed tech providers.

“Chegg took shortcuts with millions of students’ sensitive information,” Samuel Levine, the director of the agency’s Bureau of Consumer Protection, said in a news release on Monday. “The commission will continue to act aggressively to protect personal data.”

 

Natasha Singer is a business reporter covering health technology, education technology and consumer privacy. @natashanyt"

 

For original article, please visit: 

https://www.nytimes.com/2022/10/31/business/ftc-chegg-data-security-legal-complaint.html 

No comment yet.
Scooped by Roxana Marachi, PhD
Scoop.it!

Commentary: Keep facial recognition out of New York schools // Arya and Loshkajian (2022), Times Union 

Commentary: Keep facial recognition out of New York schools // Arya and Loshkajian (2022), Times Union  | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it

By Mahima Arya and Nina Loshkajian

"In 2020, New York became a national civil rights leader, the first state in the country to ban facial recognition in schools. But almost two years later, state officials are examining whether to reverse course and give a passing grade to this failing technology.

Wasting money on biased and faulty tech will only make schools a harsher, more dangerous environment for students, particularly students of color, LGBTQ+ students, immigrant students, and students with disabilities. Preserving the statewide moratorium on biometric surveillance in schools will protect our kids from racially biased, ineffective, unsecure and dangerous tech.

 

Biometric surveillance depends on artificial intelligence, and human bias infects AI systems. Facial recognition software programmed to only recognize two genders will leave transgender and nonbinary individuals invisible. A security camera that learns who is “suspicious looking” using pictures of inmates will replicate the systemic racism that results in the mass incarceration of Black and brown men. Facial recognition systems may be up to 99 percent accurate on white men, but can be wrong more than one in three times for some women of color.

 
 

What’s worse, facial recognition technology has even higher inaccuracy rates when used on students. Voice recognition software, another widely known biometric surveillance tool, echoes this pattern of poor accuracy for those who are nonwhite, non-male, or young.

The data collected by biometric surveillance technologies is vulnerable to a variety of security threats, including hacking, data breaches and insider attacks. This data – which includes scans of facial features, fingerprints, and irises – is unique and highly sensitive, making it a valuable target for hackers and, once compromised, impossible to reissue like you would a password or PIN. Collecting and storing biometric data in schools, which tend to have inadequate cybersecurity practices, puts children at great risk of being tracked and targeted by malicious actors. There is absolutely no need to expose children to these privacy and safety risks.

 

The types of biometric surveillance technology being marketed to schools are widely recognized as dangerous. One particularly controversial vendor of facial recognition technology, Clearview AI, has reportedly tested or implemented its systems in more than 50 educational institutions across 24 states. Other countries have started to appreciate the threat Clearview poses to privacy, with Australia recently ordering it to cease its scraping of images. And last year, privacy groups in Austria, France, Greece, Italy and the U.K. filed legal complaints against Clearview. All while the company continues to market its products to schools in the U.S.

 

As the world begins to wake up to the risks of using facial recognition, New York should not make the mistake of allowing young kids to be subjected to its harms. Additionally, one study found that CCTV systems in U.K. secondary schools led many students to suppress their expressions of individuality and alter their behavior. Normalizing biometric surveillance will bring about a bleak future for kids at schools across the country.

New York shouldn’t waste money on tech that criminalizes and harms young people. Most school shootings are committed by current students or alumni of the school in question, whose faces would not be flagged as suspicious by facial recognition systems. And even if the technology were to flag a real potential perpetrator of violence, given how quickly most school shootings come to an end, it is unlikely that law enforcement would be notified and able to arrive at the scene in time to prevent such horrendous acts.

Students, parents and stakeholders have the opportunity to submit a brief survey to let the State Education Department know that they want facial recognition and other biased AI out of their schools, not just temporarily but permanently. New York must at least extend the moratorium on biometric surveillance in schools, and ultimately should put an end to the use of such problematic technology altogether."


Mahima Arya is a computer science fellow at the Surveillance Technology Oversight Project (S.T.O.P.), a human rights fellow at Humanity in Action, and a graduate of Carnegie Mellon University. Nina Loshkajian is a D.A.T.A. Law Fellow at S.T.O.P. and a graduate of New York University School of Law.

 

https://www.timesunion.com/opinion/article/Commentary-Keep-facial-recognition-out-of-New-17523857.php 

No comment yet.
Scooped by Roxana Marachi, PhD
Scoop.it!

Los Angeles Unified, Feds Investigating As Ransomware Attack Cripples IT Systems //  THE Journal 

Los Angeles Unified, Feds Investigating As Ransomware Attack Cripples IT Systems //  THE Journal  | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it

"A ransomware attack over Labor Day weekend brought to a standstill the online systems of Los Angeles Unified School District, the second-largest K–12 district in the country with about 640,000 students, LAUSD officials confirmed this morning in a statement on its website."

 

https://thejournal.com/articles/2022/09/06/los-angeles-unified-feds-investigating-as-ransomware-attack-cripples-it-systems.aspx?s=the_nu_060922&oly_enc_id=8831J2755401H5M 

No comment yet.
Scooped by Roxana Marachi, PhD
Scoop.it!

A Billion-Dollar Crypto Gaming Startup Axie Infinity (AXS) Promised Riches and Delivered Disaster // Bloomberg 

A Billion-Dollar Crypto Gaming Startup Axie Infinity (AXS) Promised Riches and Delivered Disaster // Bloomberg  | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it

By Joshua Brustein
"Over the course of his life, Alejo Lopez de Armentia has played video games for a variety of reasons. There was the thrill of competition, the desire for companionship, and, at base, the need to pass the time. In his 20s, feeling isolated while working for a solar panel company in Florida, he spent his evenings using video games as a way to socialize with his friends back in Argentina, where he grew up.


But 10 months ago, Armentia, who’s 39, discovered a new game, and with it a new reason to play: to earn a living. Compared with the massively multiplayer games that he usually played, Axie Infinity was remarkably simple. Players control three-member teams of digital creatures that fight one another. The characters are cartoonish blobs distinguished by their unique mixture of interchangeable body parts, not unlike a Mr. Potato Head. During “combat” they cheerily bob in place, waiting to take turns casting spells against their opponents. When a character is defeated, it becomes a ghost; when all three squad members are gone, the team loses. A match takes less than five minutes.

 

Even many Axie regulars say it’s not much fun, but that hasn’t stopped people from dedicating hours to researching strategies, haunting Axie-themed Discord channels and Reddit forums, and paying for specialized software that helps them build stronger teams. Armentia, who’s poured about $40,000 into his habit since last August, professes to like the game, but he also makes it clear that recreation was never his goal. “I was actually hoping that it could become my full-time job,” he says.

The reason this is possible—or at least it seemed possible for a few weird months last year—is that Axie is tied to crypto markets. Players get a few Smooth Love Potion (SLP) tokens for each game they win and can earn another cryptocurrency, Axie Infinity Shards (AXS), in larger tournaments. The characters, themselves known as Axies, are nonfungible tokens, or NFTs, whose ownership is tracked on a blockchain, allowing them to be traded like a cryptocurrency as well.

There are various ways to make money from Axie. Armentia saw his main business as breeding, which doesn’t entail playing the game so much as preparing to play it in the future. Players who own Axies can create others by choosing two they already own to act as parents and paying a cost in SLP and AXS. Once they do this and wait through an obligatory gestation period, a new character appears with some combination of its parents’ traits.

 

Every new Axie player needs Axies to play, pushing up their price. Armentia started breeding last August, at a time when normal economics seemed not to apply. “You would be making 300%, 400% on your money in five days, guaranteed,” he says. “It was stupid.”

Axie’s creator, a startup called Sky Mavis Inc., heralded all this as a new kind of economic phenomenon: the “play-to-earn” video game. “We believe in a world future where work and play become one,” it said in a mission statement on its website. “We believe in empowering our players and giving them economic opportunities. Welcome to our revolution.” By last October the company, founded in Ho Chi Minh City, Vietnam, four years ago by a group of Asian, European, and American entrepreneurs, had raised more than $160 million from investors including the venture capital firm Andreessen Horowitz and the crypto-focused firm Paradigm, at a peak valuation of about $3 billion. That same month, Axie Infinity crossed 2 million daily users, according to Sky Mavis.

 

If you think the entire internet should be rebuilt around the blockchain—the vision now referred to as web3—Axie provided a useful example of what this looked like in practice. Alexis Ohanian, co-founder of Reddit and an Axie investor, predicted that 90% of the gaming market would be play-to-earn within five years. Gabby Dizon, head of crypto gaming startup Yield Guild Games, describes Axie as a way to create an “investor mindset” among new populations, who would go on to participate in the crypto economy in other ways. In a livestreamed discussion about play-to-earn gaming and crypto on March 2, former Democratic presidential contender Andrew Yang called web3 “an extraordinary opportunity to improve the human condition” and “the biggest weapon against poverty that we have.”

By the time Yang made his proclamations, the Axie economy was deep in crisis. It had lost about 40% of its daily users, and SLP, which had traded as high as 40¢, was at 1.8¢, while AXS, which had once been worth $165, was at $56. To make matters worse, on March 23 hackers robbed Sky Mavis of what at the time was roughly $620 million in cryptocurrencies. Then in May the bottom fell out of the entire crypto market." ... 

 

For full article, please visit:

https://www.bloomberg.com/news/features/2022-06-10/axie-infinity-axs-crypto-game-promised-nft-riches-gave-ruin 

Scooped by Roxana Marachi, PhD
Scoop.it!

After Huge Illuminate Data Breach, Ed Tech’s ‘Student Privacy Pledge’ Under Fire // The 74

After Huge Illuminate Data Breach, Ed Tech’s ‘Student Privacy Pledge’ Under Fire // The 74 | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it
 Big Tech's self-regulatory effort has long been accused of being toothless. Is that about to change?  
 
By Mark Keierleber  - July 24, 2022
"A few months after education leaders at America’s largest school district announced that a technology vendor had exposed sensitive student information in a massive data breach, the company at fault — Illuminate Education — was recognized with the software industry’s equivalent of the Oscars. 

 

Since that disclosure in New York City schools, the scope of the breach has only grown, with districts in six states announcing that some 3 million current and former students had become victims. Illuminate has never disclosed the full extent of the blunder, even as critics decry significant harm to kids and security experts question why the company is being handed awards instead of getting slapped with sanctions. 

Amid demands that Illuminate be held accountable for the breach — and for allegations that it misrepresented its security safeguards — the company could soon face unprecedented discipline for violating the Student Privacy Pledge, a self-regulatory effort by Big Tech to police shady business practices. In response to inquiries by The 74, the Future of Privacy Forum, a think tank and co-creator of the pledge, disclosed Tuesday that Illuminate could soon get the boot.

Forum CEO Jules Polonetsky said his group will decide within a month whether to revoke Illuminate’s status as a pledge signatory and refer the matter to state and federal regulators, including the Federal Trade Commission, for possible sanctions. 

“We have been reviewing the deeply concerning circumstances of the breach and apparent violations of Illuminate Education’s pledge commitments,” Polonetsky said in a statement to The 74. 

Illuminate did not respond to interview requests. 

In a twist, the pledge was co-created by the Software and Information Industry Association, the trade group that recognized Illuminate last month as being  among “the best of the best” in education technology. The pledge, created nearly a decade ago, is designed to ensure that education technology vendors are ethical stewards of kids’ most sensitive data. Its staunchest critics have assailed the pledge as being toothless — if not an outright effort to thwart meaningful government regulation. Now, they are questioning whether its response to the massive Illuminate breach will be any different. 

“I have never seen anybody get anything more than a slap on the wrist from the actual people controlling the pledge,” said Bill Fitzgerald, an independent privacy researcher. Taking action against Illuminate, he said, “would break the pledge’s pretty perfect record for not actually enforcing any kind of sanctions against bad actors.”

Through the voluntary pledge, launched in 2014, hundreds of education technology companies have agreed to a slate of safety measures to protect students’ online privacy. Pledge signatories, including Illuminate, have promised they will not sell student data to third parties or use the information for targeted advertising. Companies that sign the commitment also agree to “maintain a comprehensive security program” to protect students’ personal information from data breaches. 

The privacy forum, which is funded by tech companies, has long maintained that the pledge is legally binding and offers assurances to school districts as they shop for new technology. In the absence of a federal consumer privacy law, the forum argues the pledge grants “an important and unique means for privacy enforcement,” giving the Federal Trade Commission and state attorneys general an outlet to hold education technology companies accountable via consumer protection rules that prohibit unfair and deceptive business practices. 

For years, critics have accused the pledge of providing educators and parents false assurances that a given product is safe, rendering it less useful than a pinky promise. Meanwhile, schools and technology companies have become increasingly entangled — particularly during the pandemic. As districts across the globe rushed to create digital classrooms, few governments checked to make sure the tech products officials endorsed were safe for children, according to a recent report by the Human Rights Watch. Shoddy student data practices by leading tech vendors, the group found, were rampant. Of the 164 tools analyzed, 89 percent “engaged in data practices that put children’s rights at risk,” with a majority giving student records to advertisers.

As companies suck up a mind-boggling amount of student information, a lack of meaningful enforcement has let tech companies off the hook for violating students’ privacy rights, said Hye Jung Han, a Human Rights Watch researcher focused on children. As a result, she said, students whose schools require them to use certain digital tools are being forced to “give up their privacy in order to learn.” Paired with large-scale data breaches, like the one at Illuminate, she said, students’ sensitive records could be misused for years. 

“Children, as we know, are more susceptible to manipulation based on what they see online,” she said. “So suddenly the information that’s collected about them in the classroom is being used to determine the kinds of content and the kinds of advertising that they see elsewhere on the internet. It can absolutely start influencing their worldviews.”

But the regulatory environment under the Biden administration may be entering a new, more aggressive era. The Federal Trade Commission announced in May that it would scale up enforcement on education technology companies that sell student data for targeted advertising and that “illegally surveil children when they go online to learn.” Even absent a data breach like the one at Illuminate, the commission wrote in a policy statement, education technology providers violate the federal Children’s Online Privacy Protection Act if they lack reasonable systems “to maintain the confidentiality, security and integrity of children’s personal information.” 

The FTC declined to comment for this article. Jeff Joseph, president of the Software and Information Industry Association, said its recent awards were based on narrow criteria and judges “would not be expected to be aware of the breach unless the company disclosed it during the demos.” News of the breach was widely covered in the weeks before the June awards ceremony.

The trade group “takes the privacy and security of student data seriously,” Joseph said in a statement, adding that the Future of Privacy Forum “maintains the day-to-day management of the pledge.” 

‘Absolutely concerning’

Concerns of a data breach at California-based Illuminate began to emerge in January when several of the privately held company’s popular digital tools, including programs used in New York City to track students’ grades and attendance, went dark. 

Yet it wasn’t until March that city leaders announced that the personal data of some 820,000 current and former students — including their eligibility for special education services and for free or reduced-price lunches — had been compromised in a data breach. In disclosing the breach, city education officials accused the company of misrepresenting its security safeguards. The Department of Education, which reportedly paid Illuminate $16 million over the last three years, told schools in May to stop using the company’s tools. 

A month later, officials at the New York State Education Department launched an investigation into whether the company’s data security practices ran afoul of state law, department officials said. Under the law, education vendors are required to maintain “reasonable” data security safeguards and must notify schools about data breaches “in the most expedient way possible and without unreasonable delay.” 

Outside New York City, state officials said the breach affected about 174,000 additional students across the state.

Doug Levin, the national director of The K12 Security Information eXchange, said the state should issue “a significant fine” to Illuminate for misrepresenting its security protocols to educators. Sanctions, he said, would “send a strong and very important signal that not only must you ensure that you have reasonable security in place, but if you say you do and you don’t, you will be penalized.” 

Meanwhile, Illuminate has since become the subject of two federal class-action lawsuits in New York and California, including one that alleges that students’ sensitive information “is now an open book in the hands of unknown crooks” and is likely being sold on the dark web “for nefarious and mischievous ends.” 

Plaintiff attorney Gary Graifman said that litigation is crucial for consumers because state attorneys general are often too busy to hold companies accountable. 

“There’s got to be some avenue of interdiction that occurs so that companies adhere to policies that guarantee people their private information will be secured,” he said. “Obviously if there is strong federal legislation that occurs in the future, maybe that would be helpful, but right now that is not the case.”

School districts in California, Colorado, Connecticut, Oklahoma and Washington have since disclosed to current and former students that their personal information had been compromised in the breach. But the full extent remains unknown because “Illuminate has been the opposite of forthcoming about what has occurred,” Levin said. 

Most states do not require companies to disclose data breaches to the public. Some 5,000 schools serving 17 million students use Illuminate tools, according to the company, which was founded in 2009.

“We now know that millions of students have been affected by this incident, from coast to coast in some of the largest school districts in the nation,” including in New York City and Los Angeles, Levin said. “That is absolutely concerning, and I think it shines a light on the role of school vendors,” who are a significant source of education data breaches. 

Nobody, including the National Security Agency, can guarantee that their cybersecurity infrastructure will hold up against motivated hackers, Levin said, but Illuminate’s failure to disclose the extent of the breach raises a major red flag. 

“The longer that Illuminate does not come clean with what’s happened, the worse it looks,” he said. “It suggests that this was maybe leaning on the side of negligence versus them being an unfortunate victim.”

‘A public relations tool’

When Illuminate signed the privacy pledge six years ago, it acknowledged the importance of protecting students’ data and said it offered a “secure online environment with data privacy securely in place.” On its website, Illuminate touts an “unwavering commitment to student data privacy,” and offers a link to the pledge. 

“By signing this pledge,” the company wrote in a 2016 blog post, “we are making a commitment to continue doing what we have already been doing from the beginning — promoting that student data be safeguarded and used for encouraging student and educator success.” 

Some pledge critics have accused tech companies of using it as a marketing tool. In 2018, a Duke Law and Technology Review report argued that pledge noncompliance was rampant and accused it of being “a mirage” that offered comfort to consumers “while providing little actual benefit.”... 

 

For full/original post, please visit:

https://www.the74million.org/article/after-huge-illuminate-data-breach-ed-techs-student-privacy-pledge-under-fire/ 


How Amazon Operates in Education // Williamson et al., 2022, Code Acts in Education  


By Ben Williamson, Kalervo N. Gulson, Carlo Perrotta and Kevin Witzenberger

 

"The global ‘big tech’ company Amazon is increasing its reach and power across a range of industries and sectors, including education. In a new paper for the special symposium ‘Platform Studies in Education’ in Harvard Educational Review, we conceptualize Amazon as a ‘state-like corporation’ influencing education through a ‘connective architecture’ of cloud computing, infrastructure and platform technologies. Like its retail and delivery logistics business, it operates at international scope and scale and, congruent with Amazon’s growing influence across industries and sectors, possesses the power to reshape a wide range of educational practices and processes.

Our starting point is that education increasingly involves major technology companies, such as Google, Microsoft, and Amazon playing active roles as new kinds of networked governance actors. Infrastructures of test-based accountability and governance in education have long involved technical and statistical organizations. However, contemporary education governance is increasingly ‘data-driven’, using advanced technologies to collect and process huge quantities of digital information about student achievement and school and system performance.

In this context, new digitalized and datafied processes of education governance now involve multinational technology businesses offering infrastructure, platforms and data interoperability services. These connective architectures can affect the ways information is generated and used for institutional decision making, and also introduce new technical affordances into school practices, such as new platform-based learning, API-enabled integrations for increased interoperability, and advanced computing and data processing functionality from cloud infrastructures.

Our analysis focuses on Amazon, specifically its cloud computing subsidiary Amazon Web Services (AWS). Despite significant public, media, and regulatory attention to many of Amazon’s other activities and business practices, its activities in education remain only hazily documented or understood. AWS, we argue, enacts five distinctive operations in education.

Inscribing

The first part of our examination of AWS identifies how its corporate strategy underpins and infuses its objectives for education—a process we call inscribing to refer to the ways technology companies impress their business models on to the education sector. AWS is Amazon’s main profit engine, generating more than 60% of the corporation’s operating profits. Typifying the technoeconomic business model of big tech, it functions as a ‘landlord’ hosting industry, government, state and public sector operations on the cloud, while generating value from the ‘rent’ paid for on-demand access to cutting-edge cloud services, data processing, machine learning and artificial intelligence functionalities.

The ways this process of inscribing the business model on education takes place is evident in commercial marketing and discourse. AWS seeks to establish itself as an essential technical substrate of teaching, learning and administration, promoting its capacity to improve ‘virtual education’, ‘on-demand learning’ and ‘personalized learning’, and to support ‘digital transformation’ through ‘cloud-powered’ services like ‘campus automation’, ‘data analytics platforms’ and ‘artificial intelligence’. These promotional inscriptions paint a seductive picture of ‘pay-as-you-go’ educational improvement and seamless ‘plug-and-play’ transformation.

Beyond being discursive, these transformations require very specific kinds of contractual relations for cloud access, pay-as-you-go plans, and data agreements as per the AWS business model. AWS thus discursively inscribes and materially enacts its business model within education, impressing the techno-economic model of cloud tenancy, pay-as-you-go subscription rents, and computational outsourcing on to the education sector—potentially affecting some of the core functions of education in its pursuit of valuable rent and data extraction. Through this strategy, AWS is fast becoming a key cloud landlord for the education sector, governing the ways schools, colleges and edtech companies can access and use cloud services and digital data, while promoting a transformational vision of education in which its business interests might thrive.

Habituating

The second architectural operation of AWS is its techniques for accustoming users to the functionality of the cloud. We term this habituating users to AWS, or synchronizing human skills to the cloud. It does so through AWS Educate, an educational skills program designed to develop teachers and students’ competencies in cloud computing and readiness for ‘cloud careers’. AWS Educate seeks to establish a positive educational discourse of ‘the cloud’, whereby educators and students are encouraged to develop their skills with AWS services and tools for future personal success, thereby connecting hundreds of thousands of students, educators and institutions and accustoming current and future users to the AWS architecture.

With stated aims to reach 29 million learners worldwide by 2025, key features of AWS Educate include Cloud Career Pathways and Badges, with dedicated technical courses and credentials aligned to industry job roles like cloud computing engineer and data scientist. These credentials are underpinned by the Cloud Competency Framework, a global standard used to create, assess, and measure AWS Educate cloud programs informed by the latest labour market data on in-demand jobs. This strategy also serves the goal of increasing user conversions and further AWS adoption and expansion, advancing the business aim of converting user engagement into habitual long-term users as a route to future revenue streams.

In short, through its habituating operations, AWS promotes a normative vision of education as electronic micro-bundles of competency training and credentials, twinned with the habituation of users to its infrastructure. While serving its own revenue maximization prospects, AWS Educate challenges public education values of cultivating informed citizenship with values prioritizing a privatized and platformized education dedicated to the instrumentalist development of a future digital workforce.

Interfacing

The third operation enacted by AWS in education is interfacing. AWS provides new kinds of technical interfaces between educational institutions, intermediary partners, and the AWS infrastructure. This is exemplified by Amazon’s Alexa, a conversational interface, or voice assistant, that sits between users and AWS, and which AWS has begun promoting for integration into other educational applications. Its interfacing operations are achieved by the Alexa Education Skills Kit, a set of standards allowing Alexa to be embedded in third party products and services. We argue it illustrates how application programming interfaces (APIs) act as a connective tissue between powerful global data infrastructures, the digital education platform industry, and educational institutions.

For example, universities can develop their own Alexa Skills in the shape of institutionally branded voice interfaces for students to access coursework, grades and performance data; educators can embed Alexa in classes as voice-enabled quizzes and automated ‘study partners’; and institutions are encouraged to include Alexa Skills in ‘smart campus’ plans.  In these ways, the Alexa Skills Kit provides a set of new AWS-enabled, automated interfaces between institutions, staff and students, mediating an increasing array of institutional relations via the AWS cloud and the automated capacities of Alexa.
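As a rough illustration of what such an interface amounts to in code (this sketch is not drawn from the paper, and the "GetGradeIntent" intent, its course slot, and the reply text are all hypothetical): an Alexa custom skill is ultimately a small request-handling function, typically run on AWS Lambda, that receives an Alexa request envelope and returns a speech response envelope — the point at which student queries begin flowing through the AWS cloud.

```python
# Minimal sketch of an Alexa custom-skill handler of the kind the Education
# Skills Kit enables: a plain Lambda-style function, no SDK.
# The intent name, slot, and reply text are hypothetical illustrations.

def handler(event, context=None):
    """Respond to an Alexa request envelope with a (stubbed) grade lookup."""
    request = event.get("request", {})
    if request.get("type") == "IntentRequest" and \
            request.get("intent", {}).get("name") == "GetGradeIntent":
        # Slot values arrive inside the intent; a real skill would query an
        # institutional system (student information system, LMS) behind this.
        course = (request["intent"].get("slots", {})
                  .get("course", {}).get("value", "your course"))
        speech = f"Your current grade in {course} is not yet posted."
    else:
        # Launch requests and unrecognized intents get a generic prompt.
        speech = "Welcome. Ask me about a course to hear your grade."

    # Alexa expects a response envelope of this general shape.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

Even this toy version makes the mediation visible: every spoken question about coursework or grades becomes a cloud request that AWS hosts, logs, and bills for.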

The Alexa Education Skills Kit is one of many APIs AWS provides for the educational sector to access fast, scalable, reliable, and inexpensive data storage infrastructures and cloud computing capacities. The integration of automated voice assistants through the Education Skills Kit provides educational institutions a gateway into the core functionality of AWS. These interfaces depend upon the automated collection and analysis of voice data on campuses, its automated analysis in the AWS cloud, and the production of automated feedback, so generating a cascade of automation within institutions that have synchronized their operations with AWS. It normalizes ideals of automation in education, including the extensive data collection and student monitoring that such automation entails. Through its interfacing operations, we therefore argue, AWS and Alexa are advancing cascading logics of automation further into everyday educational routines.

Platforming

Cloud computing establishes the social and technical arrangements that enable other technology companies to build and scale platforms. Amazon has developed an explicit market strategy in education by hosting—or platforming—the wider global industry of education technology on the AWS Cloud, specifically by providing the server hosting, data storage and analytics applications necessary for third parties to build and operate education platforms. Its AWS Imagine conference highlights its aspirations to host a huge range of edtech products and other services, and to guide how the industry imagines the future of education.

The role of AWS in platforming the edtech industry includes back-end server hosting and data storage as well as active involvement in startup development. Many of the globe’s largest and most highly capitalized edtech companies and education businesses are integrated into AWS. AWS support for the edtech industry encompasses data centre and network architecture to ensure that clients can scale their platform, along with data security and other AWS services including content delivery, database, AI, machine learning, and digital end user engagement services. This complete package enables edtech companies to deliver efficient computing, storage, scale, and reliability, and advanced features like data analytics and other AI services.

As such, through its platforming operations, AWS acts as an integral albeit largely invisible cloud presence in the back-end of a growing array of edtech companies. The business model of AWS, and the detailed contractual agreements that startups must sign to access AWS services, construct new kinds of dependencies and technical lock-ins, whereby the functionalities offered by third-party education platform companies can only exist according to the contractual rules and the cloud capacities and constraints of AWS. This puts AWS into a powerful position as a catalyst and accelerator of ‘digital transformation’ in education, ultimately responsible for re-tooling the industry for expanded scale, computational power, and data analytics functionality.

Re-infrastructuring

The final operation we detail is re-infrastructuring, referring to the migration of an educational institution’s digital infrastructure to AWS. It does so through AWS Migration services, and by providing institutions with a suite of data analytics, AI and machine learning functionalities. AWS promises that by ‘using the AWS Cloud, schools and districts can get a comprehensive picture of student performance by connecting products and services so they seamlessly share data across platforms’. AWS also promotes Machine Learning for Education to ‘identify at-risk students and target interventions’ and to ‘improve teacher efficiency and impact with personalized content and AI-enabled teaching assistants and tutors’. 

This seamless introduction of AI and automation is enabled by the formation of ‘data lakes’—a repository that hosts multiple types of data for machine learning analysis and visualization in the cloud. The process of ‘architecting a data lake‘ involves the deployment of multiple AWS products and functionalities, including those for pulling data seamlessly from student information and learning management systems, and for handling the ‘machine learning workload’ of analysis. AWS promotes full infrastructure migration to the cloud in terms of making everything from students and staff to estates and operational processes more intelligible from data, and thereby more amenable to targeted action or intervention.
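To make the 'data lake' idea concrete (a hedged sketch, not the paper's or AWS's actual schema — the bucket layout, source names, and record fields below are hypothetical): a lake is typically just an object store holding raw records under partitioned key prefixes, so that analytics and machine learning jobs can later prune by source and date.

```python
# Illustrative sketch of the partitioned object layout a cloud data lake
# commonly uses (Hive-style year=/month=/day= prefixes). All names are
# hypothetical; a dict stands in for the S3 bucket.
import datetime
import json

def lake_key(source: str, record_date: datetime.date, record_id: str) -> str:
    """Build a partitioned key for the raw landing zone of the lake."""
    return (f"raw/{source}/"
            f"year={record_date.year}/month={record_date.month:02d}/"
            f"day={record_date.day:02d}/{record_id}.json")

def stage_record(store: dict, source: str, record: dict) -> str:
    """Stand-in for an object-store put: write one raw record to the lake."""
    d = datetime.date.fromisoformat(record["date"])
    key = lake_key(source, d, record["id"])
    # A real pipeline would call the cloud store here instead of a dict.
    store[key] = json.dumps(record)
    return key
```

Once attendance, grades, and platform telemetry all land under such prefixes, they become queryable as one pool — which is exactly the consolidation, and the dependency, the authors describe.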

Through cloud migration and data lake architecting, schools and universities are outsourcing a growing range of educational and administrative operations. This ultimately reflects a fresh hierarchical stratification of education, with AWS and its cloud firmly on top, followed by a sprawling ecology of edtech companies that mediate between AWS and the clients at the bottom: the schools and universities that form the data lakes from which AWS derives value. Yet, despite being highly consequential, these infrastructural rearrangements remain opaque, hidden in proprietorial ‘black boxes’, potentially resistant to autonomous institutional decisions, and extremely expensive and challenging to reverse.

‘Big tech’ and ‘state-like corporations’

One key implication we detail in the paper is the growing role of multinational ‘big tech’ companies in education, and the complex ways they are advancing longstanding reform efforts to privatize and commercialize public education, albeit through new techno-economic business models and practices. Social scientific and legal scholarship on private platforms and infrastructures has begun to contend with their growing social, technical and economic power, particularly their implications for key functions and processes traditionally considered the responsibility of state agencies or public sector organizations. As a corporate cloud company, Amazon is attempting to create market dominance and even monopoly power across a multitude of sectors and industries, raising sharp political and legal questions over the appropriate regulatory or antitrust measures to be taken.

Part of this competition is also for infrastructural dominance in education. The expansion of AWS signifies how the governance of the public sector and its institutions is becoming increasingly dependent on the standards and conditions set by multinational big tech corporations like Amazon and Google. Amazon is gathering significant power as what Marion Fourcade and Jeff Gordon term a ‘state-like corporation’. As a corporation with state-like powers, AWS can use its technical and economic capacity to influence diverse education systems and contexts, at international scale, and potentially to fulfil governance roles conventionally reserved for state departments and ministries of education.

As such, the continuing expansion of AWS into education, through the connective architecture we outline in the paper, might substitute existing models of governance and policy implementation with programmable rules and computer scripts for action that are enacted by software directly within schools and colleges rather than mandated from afar by policy prescriptions and proscriptions. As a state-like corporation with international reach and market ambitions, AWS is exceeding the jurisdictional authority of policy centres to potentially become the default digital architecture for governing education globally."

The full paper is available (paywalled) at Harvard Educational Review, or freely available in manuscript form.

 

Please read original post at:
https://codeactsineducation.wordpress.com/2022/07/12/how-amazon-operates-in-education/ 

 


"Metaverse: Another cesspool of toxic content" [Report] // SumOfUs

To download report above, click on title, arrow, or link below. 

https://www.sumofus.org/images/Metaverse_report_May_2022.pdf 

 
See also: 

May 23, 2022
"The report from SumOfUs highlights the staggering number of harms found on Meta’s Horizon Worlds – as investors gather to vote on metaverse human rights assessment 

 

San Francisco - A researcher was sexually harassed and assaulted (virtually), and witnessed gun violence and homophobic slurs within hours of entering Meta’s new virtual reality platform, Horizon Worlds. 

 

Within about an hour of being on the platform, the researcher, posing as a 21 year old woman of color, was led to a private room at a house party where she was sexually assaulted, while a second user watched. View the clip here.

 

The findings of the investigation conducted by corporate accountability group SumOfUs come days before investors are due to vote on a shareholder resolution, co-filed by SumOfUs with Arjuna Capital, that demands Meta undertake a human rights impact assessment of its metaverse plans. 

 

The research is further evidence that Meta’s light-touch approach to moderation is already allowing toxic behavior to take root on its VR platforms, including sexual harassment and predatory behaviour towards female-appearing and female-sounding avatars.

 

Rewan Al-Hadad, SumOfUs campaign director, said: “As it stands now, the metaverse is not safe, and based on Meta’s stance on how it will moderate the platform, it will continue to spiral into a dark abyss. Our researcher went from donning an Oculus headset for the first time, to being virtually raped in less than an hour. And this isn’t a one-off account. Mark Zuckerberg claims he wants to connect the world – but what he’s doing is exposing people to seriously harmful encounters in a desperate attempt to save his company.” 

 

Multiple researchers and users have reported similar experiences of sexual violence, hate speech and graphic content on Meta’s VR platforms, as well as on non-Meta apps that can be accessed through an Oculus headset. This is despite Meta’s promises to improve safety measures (1) and implement community guidelines. (2)

 

Last week Nick Clegg wrote that Metaverse moderation would be different to the active policing of problematic content on the Facebook platform but offered little detail about how this would work in practice.

 

In addition, SumOfUs and other groups as part of the Make Mark Listen campaign are calling for better governance of the company through shareholder resolution 4 demanding an assessment of the Audit and Risk Oversight Committee’s capacities and performance in overseeing company risks to public safety and the public interest.

 

Notes to editors:

1. Meta. Notice of Monitoring and Recording to Improve Safety in Horizon Worlds. 2022. https://store.facebook.com/legal/quest/monitoring-recording-safety-horizon/?utm_source=https%3A%2F%2Fwww.google.com%2F&utm_medium=organicsearch.

2. Meta. Conduct in VR Policy. 2022. https://store.facebook.com/help/quest/articles/accounts/privacy-information-and-settings/conduct-in-vr-policy/

 

For full summary above:

https://www.sumofus.org/media/new-research-documents-sexual-assault-within-hours-of-entering-metas-virtual-reality-platform/ 


Accused of Cheating by an Algorithm, and a Professor She Had Never Met // The New York Times


By Kashmir Hill

"A Florida teenager taking a biology class at a community college got an upsetting note this year. A start-up called Honorlock had flagged her as acting suspiciously during an exam in February. She was, she said in an email to The New York Times, a Black woman who had been “wrongfully accused of academic dishonesty by an algorithm.”

 

What happened, however, was more complicated than a simple algorithmic mistake. It involved several humans, academic bureaucracy and an automated facial detection tool from Amazon called Rekognition. Despite extensive data collection, including a recording of the girl, 17, and her screen while she took the test, the accusation of cheating was ultimately a human judgment call: Did looking away from the screen mean she was cheating?

 

The pandemic was a boom time for companies that remotely monitor test takers, as it became a public health hazard to gather a large group in a room. Suddenly, millions of people were forced to take bar exams, tests and quizzes alone at home on their laptops. To prevent the temptation to cheat, and catch those who did, remote proctoring companies offered web browser extensions that detect keystrokes and cursor movements, collect audio from a computer’s microphone, and record the screen and the feed from a computer’s camera, bringing surveillance methods used by law enforcement, employers and domestic abusers into an academic setting.

 

Honorlock, based in Boca Raton, Fla., was founded by a couple of business school graduates who were frustrated by classmates they believed were gaming tests. The start-up administered nine million exams in 2021, charging about $5 per test or $10 per student to cover all the tests in the course. Honorlock has raised $40 million from investors, the vast majority of it since the pandemic began.

 

Keeping test takers honest has become a multimillion-dollar industry, but Honorlock and its competitors, including ExamSoft, ProctorU and Proctorio, have faced major blowback along the way: widespread activism, media reports on the technology’s problems and even a Senate inquiry. Some surveilled test takers have been frustrated by the software’s invasiveness, glitches, false allegations of cheating and failure to work equally well for all types of people.

 

The Florida teenager is a rare example of an accused cheater who received the evidence against her: a 50-second clip from her hourlong Honorlock recording. She asked that her name not be used because of the stigma associated with academic dishonesty.

Flagged

The teenager was in the final year of a special program to earn both her high school diploma and her associate degree. Nearly 40 other students were in the teenager’s biology class, but they never met. The class, from Broward College, was fully remote and asynchronous.

Asynchronous online education was growing even before the pandemic. It offers students a more flexible schedule, but it has downsides. Last year, an art history student who had a question about a recorded lecture tried to email his professor, and discovered that the man had died nearly two years earlier.

 

The Florida teenager’s biology professor, Jonelle Orridge, was alive, but distant, her interactions with students taking place by email, as she assigned readings and YouTube videos. The exam this past February was the second the teenager had taken in the class. She set up her laptop in her living room in North Lauderdale, making sure to follow a long list of rules set out in the class syllabus and in an Honorlock drop-down menu: Do not eat or drink, use a phone, have others in the room, look offscreen to read notes, and so on.

 

The student had to pose in front of her laptop camera for a photo, show her student ID, and then pick her laptop up and use its camera to provide a 360-degree scan of the room to prove she didn’t have any contraband material. She didn’t mind any of this, she said, because she hoped the measures would prevent others from cheating.

 

She thought the test went well, but a few days later, she received an email from Dr. Orridge.

“You were flagged by Honorlock,” Dr. Orridge wrote. “After review of your video, you were observed frequently looking down and away from the screen before answering questions.”


She was receiving a zero on the exam, and the matter was being referred to the dean of student affairs. “If you are found responsible for academic dishonesty the grade of zero will remain,” Dr. Orridge wrote.

“This must be a mistake,” the student replied in an email. “I was not being academically dishonest. Looking down does not indicate academic dishonesty.”

‘The word of God’

The New York Times has reviewed the video. Honorlock recordings of several other students are visible briefly in the screen capture, before the teenager’s video is played.

The student and her screen are visible, as is a partial log of time stamps, including at least one red flag, which is meant to indicate highly suspicious behavior, just a minute into her test. As the student begins the exam, at 8:29 a.m., she scrolls through four questions, appearing to look down after reading each one, once for as long as 10 seconds. She shifts slightly. She does not answer any of the questions during the 50-second clip.

 

It’s impossible to say with certainty what is happening in the video. What the artificial intelligence technology got right is that she looked down. But to do what? She could be staring at the table, a smartphone or notes. The video is ambiguous.

When the student met with the dean and Dr. Orridge by video, she said, she told them that she looks down to think, and that she fiddles with her hands to jog her memory. They were not swayed. The student was found “responsible” for “noncompliance with directions,” resulting in a zero on the exam and a warning on her record.

“Who stares at a test the entire time they’re taking a test? That’s ridiculous. That’s not how humans work,” said Cooper Quintin, a technologist at the Electronic Frontier Foundation, a digital rights organization. “Normal behaviors are punished by this software.”

After examining online proctoring software that medical students at Dartmouth College claimed had wrongly flagged them, Mr. Quintin suggested that schools have outside experts review evidence of cheating. The most serious flaw with these systems may be a human one: educators who overreact when artificially intelligent software raises an alert.

“Schools seem to be treating it as the word of God,” Mr. Quintin said. “If the computer says you’re cheating, you must be cheating.”

Tess Mitchell, a spokeswoman for Honorlock, said it was not the company’s role to advise schools on how to deal with behavior flagged by its product.

“In no case do we definitively identify ‘cheaters’ — the final decision and course of action is up to the instructor and school, just as it would be in a classroom setting,” Ms. Mitchell said. “It can be challenging to interpret a student’s actions. That’s why we don’t.”

 

Dr. Orridge did not respond to requests for comment for this article. A spokeswoman from Broward College said she could not discuss the case because of student privacy laws. In an email, she said faculty “exercise their best judgment” about what they see in Honorlock reports. She said a first warning for dishonesty would appear on a student’s record but not have more serious consequences, such as preventing the student from graduating or transferring credits to another institution.

Who decides

Honorlock hasn’t previously disclosed exactly how its artificial intelligence works, but a company spokeswoman revealed that the company performs face detection using Rekognition, an image analysis tool that Amazon started selling in 2016. The Rekognition software looks for facial landmarks — nose, eyes, eyebrows, mouth — and returns a confidence score that what is onscreen is a face. It can also infer the emotional state, gender and angle of the face.
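The fields the article describes — a per-face confidence score plus an inferred emotion and head angle — can be pulled out of a DetectFaces-style response with a small parser. This is only a sketch: the field names follow Amazon Rekognition’s documented response shape, but the sample values below are invented and no AWS call is made.

```python
def summarize_faces(response):
    """Summarize the fields mentioned in the article -- face confidence,
    dominant inferred emotion, and head angle (yaw) -- from a
    DetectFaces-style response dictionary."""
    summaries = []
    for face in response.get("FaceDetails", []):
        # Rekognition returns a list of candidate emotions with confidences;
        # take the highest-confidence one.
        top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
        summaries.append({
            "face_confidence": face["Confidence"],
            "emotion": top_emotion["Type"],
            "yaw_degrees": face["Pose"]["Yaw"],
        })
    return summaries

# Invented sample response mirroring the documented field names.
sample = {
    "FaceDetails": [{
        "Confidence": 99.9,
        "Emotions": [
            {"Type": "SAD", "Confidence": 99.9},
            {"Type": "CALM", "Confidence": 0.1},
        ],
        "Pose": {"Yaw": -4.2, "Pitch": 11.0, "Roll": 1.3},
    }]
}
print(summarize_faces(sample))
```

Note how little the summary contains: a downward glance shows up only as a pose angle and an emotion label, which is exactly the ambiguity the Florida student’s video illustrates.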

Honorlock will flag a test taker as suspicious if it detects multiple faces in the room, or if the test taker’s face disappears, which could happen when people cover their face with their hands in frustration, said Brandon Smith, Honorlock’s president and chief operating officer.
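The rule Mr. Smith describes — suspicious if more than one face is detected, or if the face disappears — amounts to a simple per-frame check. A minimal sketch of that stated rule, assuming one face count per sampled frame; this is an illustration, not Honorlock’s actual code:

```python
def flag_frames(face_counts):
    """Apply the described rule to a sequence of per-frame face counts:
    flag a frame if no face is detected (face 'disappeared') or if
    more than one face is present."""
    flags = []
    for i, n in enumerate(face_counts):
        if n == 0:
            flags.append((i, "face_missing"))
        elif n > 1:
            flags.append((i, "multiple_faces"))
    return flags

# Simulated detector output: frame 2 loses the face (hands over face?),
# frame 4 picks up a second "face" (a poster on the wall?).
counts = [1, 1, 0, 1, 2, 1]
print(flag_frames(counts))  # → [(2, 'face_missing'), (4, 'multiple_faces')]
```

The sketch makes the failure mode concrete: the rule cannot distinguish a frustrated student covering her face from an accomplice entering the room, or a poster from a person — which is why the proctors described below ended up correcting the detector.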

Honorlock does sometimes use human employees to monitor test takers; “live proctors” will pop in by chat if there is a high number of flags on an exam to find out what is going on. Recently, these proctors discovered that Rekognition was mistakenly registering faces in photos or posters as additional people in the room.

When something like that happens, Honorlock tells Amazon’s engineers. “They take our real data and use it to improve their A.I.,” Mr. Smith said.

Rekognition was supposed to be a step up from what Honorlock had been using. A previous face detection tool from Google was worse at detecting the faces of people with a range of skin tones, Mr. Smith said.

But Rekognition has also been accused of bias. In a series of studies, Joy Buolamwini, a computer researcher and executive director of the Algorithmic Justice League, found that gender classification software, including Rekognition, worked least well on darker-skinned females.

 

Determining a person’s gender is different from detecting or recognizing a face, but Dr. Buolamwini considered her findings a canary in a coal mine. “If you sell one system that has been shown to have bias on human faces, it is doubtful your other face-based products are also completely bias free,” she wrote in 2019.

The Times analyzed images from the student’s Honorlock video with Amazon Rekognition. It was 99.9 percent confident that a face was present and that it was sad, and 59 percent confident that the student was a man.

Dr. Buolamwini said the Florida student’s skin color and gender should be a consideration in her attempts to clear her name, regardless of whether they affected the algorithm’s performance.

“Whether it is technically linked to race or gender, the stigma and presumption placed on students of color can be exacerbated when a machine label feeds into confirmation bias,” Dr. Buolamwini wrote in an email.

The human element

As the pandemic winds down, and test takers can gather in person again, the remote proctoring industry may soon be in lower demand and face far less scrutiny. However, the intense activism around the technology during the pandemic did lead at least one company to make a major change to its product.

ProctorU, an Honorlock competitor, no longer offers an A.I.-only product that flags videos for professors to review.

“The faculty didn’t have the time, training or ability to do it or do it properly,” said Jarrod Morgan, ProctorU’s founder. A review of ProctorU’s internal data found that videos of flagged behavior were opened only 11 percent of the time.

 

All suspicious behavior is now reviewed by one of the company’s approximately 1,300 proctors, most of whom are based abroad in cheaper labor markets. Mr. Morgan said these contractors went through rigorous training, and would “confirm a breach” only if there was solid evidence that a test taker was receiving help. ProctorU administered four million exams last year; in analyzing three million of those tests, it found that over 200,000, or about 7 percent, involved some kind of academic misconduct, according to the company.

The teenager graduated from Broward College this month. She remains distraught at being labeled a cheater and fears it could happen again.

“I try to become like a mannequin during tests now,” she said.
 

Kashmir Hill is a tech reporter based in New York. She writes about the unexpected and sometimes ominous ways technology is changing our lives, particularly when it comes to our privacy. @kashhill"

 

Please visit original article published here:

https://www.nytimes.com/2022/05/27/technology/college-students-cheating-software-honorlock.html 

No comment yet.
Scooped by Roxana Marachi, PhD
Scoop.it!

The Surveillant University: Remote Proctoring, AI, and Human Rights // Tessa Scassa, Canada Research Chair in Information Law and Policy, University of Ottawa


Please visit link below to access document:

https://www.cjccl.ca/wp-content/uploads/2022/10-Scassa.pdf 


A teen girl sexually exploited on Snapchat takes on American tech // The Washington Post 


A 16-year-old girl is leading a class-action lawsuit against one of the country's most popular apps — claiming its designers have done almost nothing to prevent the sexual exploitation of girls like her.

 

By Drew Harwell

"She was 12 when he started demanding nude photos, saying she was pretty, that he was her friend. She believed, because they had connected on Snapchat, that her photos and videos would disappear.


Now, at 16, she is leading a class-action lawsuit against an app that has become a mainstay of American teen life — claiming its designers have done almost nothing to prevent the sexual exploitation of girls like her.

 

Her case against Snapchat reveals a haunting story of shame and abuse inside a video-messaging app that has for years flown under lawmakers’ radar, even as it has surpassed 300 million active users and built a reputation as a safe space for young people to trade their most intimate images and thoughts.

 

But it also raises difficult questions about privacy and safety, and it throws a harsh spotlight on the tech industry’s biggest giants, arguing that the systems they depend on to root out sexually abusive images of children are fatally flawed.

 

“There isn’t a kid in the world who doesn’t have this app,” the girl’s mother told The Washington Post, “and yet an adult can be in correspondence with them, manipulating them, over the course of many years, and the company does nothing. How does that happen?”

In the lawsuit, filed Monday in a California federal court, the girl — requesting anonymity as a victim of sexual abuse and referred to only as L.W. — and her mother accuse Snapchat of negligently failing to design a platform that could protect its users from “egregious harm.”

 

The man — an active-duty Marine who was convicted last year of charges related to child pornography and sexual abuse in a military court — saved her Snapchat photos and videos and shared them with others around the Web, a criminal investigation found.

Snapchat’s parent company, Snap, has defended its app’s core features of self-deleting messages and instant video chats as helping young people speak openly about important parts of their lives.

 

In a statement to The Post, the company said it employs “the latest technologies” and develops its own software “to help us find and remove content that exploits or abuses minors.”

“While we cannot comment on active litigation, this is tragic, and we are glad the perpetrator has been caught and convicted,” Snap spokeswoman Rachel Racusen said. “Nothing is more important to us than the safety of our community.”

 

Founded in 2011, the Santa Monica, Calif., company told investors last month that it now has 100 million daily active users in North America, more than double Twitter’s following in the United States, and that it is used by 90 percent of U.S. residents aged 13 to 24 — a group it designated the “Snapchat Generation.”

For every user in North America, the company said, it received about $31 in advertising revenue last year. Now worth nearly $50 billion, the public company has expanded its offerings to include augmented-reality camera glasses and auto-flying selfie drones.

 

But the lawsuit likens Snapchat to a defective product, saying it has focused more on innovations to capture children’s attention than on effective tools to keep them safe.

 

The app relies on “an inherently reactive approach that waits until a child is harmed and places the burden on the child to voluntarily report their own abuse,” the girl’s lawyers wrote. “These tools and policies are more effective in making these companies wealthier than [in] protecting the children and teens who use them.”

Apple and Google are also listed as defendants in the case because of their role in hosting an app, Chitter, that the man had used to distribute the girl’s images. Both companies said they removed the app Wednesday from their stores following questions from The Post.

Apple spokesman Fred Sainz said in a statement that the app had repeatedly broken Apple’s rules around “proper moderation of all user-generated content.” Google spokesman José Castañeda said the company is “deeply committed to fighting online child sexual exploitation” and has invested in techniques to find and remove abusive content. Chitter’s developers did not respond to requests for comment.

 
 
 

The suit seeks at least $5 million in damages and assurances that Snap will invest more in protection. But it could send ripple effects through not just Silicon Valley but Washington, by calling out how the failures of federal lawmakers to pass tech regulation have left the industry to police itself.

“We cannot expect the same companies that benefit from children being harmed to go and protect them,” Juyoun Han, the girl’s attorney, said in a statement. “That’s what the law is for.”

Brian Levine, a professor at the University of Massachusetts at Amherst who studies children’s online safety and digital forensics and is not involved in the litigation, said the legal challenge adds to the evidence that the country’s lack of tech regulation has left young people at risk.

 

“How is it that all of the carmakers and all of the other industries have regulations for child safety, and one of the most important industries in America has next to nothing?” Levine said.

 

“Exploitation results in lifelong victimization for these kids,” and it’s being fostered on online platforms developed by “what are essentially the biggest toymakers in the world, Apple and Google,” he added. “They’re making money off these apps and operating like absentee landlords. … After some point, don’t they bear some responsibility?”

An anti-Facebook

While most social networks focus on a central feed, Snapchat revolves around a user’s inbox of private “snaps” — the photos and videos they exchange with friends, each of which self-destructs after being viewed.

 

The simple concept of vanishing messages has been celebrated as a kind of anti-Facebook, creating a low-stakes refuge where anyone can express themselves as freely as they want without worrying how others might react.

Snapchat, in its early years, was often derided as a “sexting app,” and for some users the label still fits. But its popularity has also solidified it as a more broadly accepted part of digital adolescence — a place for joking, flirting, organizing and working through the joys and awkwardness of teenage life.

 

In the first three months of this year, Snapchat was the seventh-most-downloaded app in the world, installed twice as often as Amazon, Netflix, Twitter or YouTube, estimates from the analytics firm Sensor Tower show. Jennifer Stout, Snap’s vice president of global public policy, told a Senate panel last year that Snapchat was an “antidote” to mainstream social media and its “endless feed of unvetted content.”

 

Snapchat photos, videos and messages are designed to automatically vanish once the recipient sees them or after 24 hours. But Snapchat’s carefree culture has raised fears that it’s made it too easy for young people to share images they may one day regret.


Snapchat allows recipients to save some photos or videos within the app, and it notifies the sender if a recipient tries to capture a photo or video marked for self-deletion. But third-party workarounds are rampant, allowing recipients to capture them undetected.

Parent groups also worry the app is drawing in adults looking to prey on a younger audience. Snap has said it accounts for “the unique sensitivities and considerations of minors” when developing the app, which now bans users younger than 18 from posting publicly in places such as Snap Maps and limits how often children and teens are served up as “Quick Add” friend suggestions in other users’ accounts. The app encourages people to talk with friends they know from real life and only allows someone to communicate with a recipient who has marked them as a friend.

 
 

The company said that it takes fears of child exploitation seriously. In the second half of 2021, the company deleted roughly 5 million pieces of content and nearly 2 million accounts for breaking its rules around sexually explicit content, a transparency report said last month. About 200,000 of those accounts were axed after sharing photos or videos of child sexual abuse.

But Snap representatives have argued they’re limited in their abilities when a user meets someone elsewhere and brings that connection to Snapchat. They’ve also cautioned against more aggressively scanning personal messages, saying it could devastate users’ sense of privacy and trust.


Some of its safeguards, however, are fairly minimal. Snap says users must be 13 or older, but the app, like many other platforms, doesn’t use an age-verification system, so any child who knows how to type a fake birthday can create an account. Snap said it works to identify and delete the accounts of users younger than 13 — and the Children’s Online Privacy Protection Act, or COPPA, bans companies from tracking or targeting users under that age.

 

Snap says its servers delete most photos, videos and messages once both sides have viewed them, and all unopened snaps after 30 days. Snap said it preserves some account information, including reported content, and shares it with law enforcement when legally requested. But it also tells police that much of its content is “permanently deleted and unavailable,” limiting what it can turn over as part of a search warrant or investigation.

 
 

In 2014, the company agreed to settle charges from the Federal Trade Commission alleging Snapchat had deceived users about the “disappearing nature” of their photos and videos, and collected geolocation and contact data from their phones without their knowledge or consent.

Snapchat, the FTC said, had also failed to implement basic safeguards, such as verifying people’s phone numbers. Some users had ended up sending “personal snaps to complete strangers” who had registered with phone numbers that weren’t actually theirs.

A Snapchat representative said at the time that “while we were focused on building, some things didn’t get the attention they could have.” The FTC required the company to submit to monitoring from an “independent privacy professional” until 2034.

‘Breaking point’


Like many major tech companies, Snapchat uses automated systems to patrol for sexually exploitative content: PhotoDNA, built in 2009, to scan still images, and CSAI Match, developed by YouTube engineers in 2014, to analyze videos.

The systems work by looking for matches against a database of previously reported sexual-abuse material run by the government-funded National Center for Missing and Exploited Children (NCMEC).

But neither system is built to identify abuse in newly captured photos or videos, even though those have become the primary ways Snapchat and other messaging apps are used today.
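The limitation is inherent to the design: both tools compute a fingerprint of each image or video and check it against a database of fingerprints of previously reported material, so newly created content can never match. A minimal sketch of the blocklist approach — note that a cryptographic hash stands in here for PhotoDNA’s proprietary perceptual hash, which, unlike SHA-256, is robust to resizing and re-encoding:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Stand-in for a perceptual hash. PhotoDNA's actual algorithm is
    # proprietary and tolerates cropping, resizing, and re-encoding;
    # SHA-256 does not, so this only illustrates the matching logic.
    return hashlib.sha256(data).hexdigest()

# Database of fingerprints of previously reported material
# (in practice, maintained by NCMEC and shared with platforms).
known_db = {fingerprint(b"previously-reported-image")}

def is_flagged(image_bytes: bytes) -> bool:
    return fingerprint(image_bytes) in known_db

print(is_flagged(b"previously-reported-image"))  # True: matches the database
print(is_flagged(b"newly-captured-image"))       # False: never reported, so never flagged
```

The second call is the point of the researchers’ “breaking point” argument below: a blocklist can only recognize abuse that has already been found, reported, and fingerprinted once before.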

When the girl began sending and receiving explicit content in 2018, Snap didn’t scan videos at all. The company started using CSAI Match only in 2020.

In 2019, a team of researchers at Google, the NCMEC and the anti-abuse nonprofit Thorn had argued that even systems like those had reached a “breaking point.” The “exponential growth and the frequency of unique images,” they argued, required a “reimagining” of child-sexual-abuse-imagery defenses away from the blacklist-based systems tech companies had relied on for years.

They urged the companies to use recent advances in facial-detection, image-classification and age-prediction software to automatically flag scenes where a child appears at risk of abuse and alert human investigators for further review.

“Absent new protections, society will be unable to adequately protect victims of child sexual abuse,” the researchers wrote.

Three years later, such systems remain unused. Some similar efforts have also been halted due to criticism they could improperly pry into people’s private conversations or raise the risks of a false match.

 

In September, Apple indefinitely postponed a proposed system — to detect possible sexual-abuse images stored online — following a firestorm that the technology could be misused for surveillance or censorship.

But the company has since released a separate child-safety feature designed to blur out nude photos sent or received in its Messages app. The feature shows underage users a warning that the image is sensitive and lets them choose to view it, block the sender or to message a parent or guardian for help.

Privacy advocates have cautioned that more-rigorous online policing could end up penalizing kids for being kids. They’ve also worried that such concerns could further fuel a moral panic, in which some conservative activists have called for the firings of LGBTQ teachers who discuss gender or sexual orientation with their students, falsely equating it to child abuse.

But the case adds to a growing wave of lawsuits challenging tech companies to take more responsibility for their users’ safety — and arguing that past precedents should no longer apply.

The companies have traditionally argued in court that one law, Section 230 of the Communications Decency Act, should shield them from legal liability related to the content their users post. But lawyers have increasingly argued that the protection should not inoculate the company from punishment for design choices that promoted harmful use.

In one case filed in 2019, the parents of two boys killed when their car smashed into a tree at 113 mph while recording a Snapchat video sued the company, saying its “negligent design” decision to allow users to imprint real-time speedometers on their videos had encouraged reckless driving.

A California judge dismissed the suit, citing Section 230, but a federal appeals court revived the case last year, saying it centered on the “predictable consequences of designing Snapchat in such a way that it allegedly encouraged dangerous behavior.” Snap has since removed the “Speed Filter.” The case is ongoing.

In a separate lawsuit, the mother of an 11-year-old Connecticut girl sued Snap and Instagram parent company Meta this year, alleging she had been routinely pressured by men on the apps to send sexually explicit photos of herself — some of which were later shared around her school. The girl killed herself last summer, the mother said, due in part to her depression and shame from the episode.

Congress has voiced some interest in passing more-robust regulation, with a bipartisan group of senators writing a letter to Snap and dozens of other tech companies in 2019 asking about what proactive steps they had taken to detect and stop online abuse.

But the few proposed tech bills have faced immense criticism, with no guarantee of becoming law. The most notable — the Earn It Act, which was introduced in 2020 and passed a Senate committee vote in February — would open tech companies to more lawsuits over child-sexual-abuse imagery, but technology and civil rights advocates have criticized it as potentially weakening online privacy for everyone.

Some tech experts note that predators can contact children on any communications medium and that there is no simple way to make every app completely safe. Snap’s defenders say applying some traditional safeguards — such as the nudity filters used to screen out pornography around the Web — to personal messages between consenting friends would raise its own privacy concerns.

But some still question why Snap and other tech companies have struggled to design new tools for detecting abuse.

Hany Farid, an image-forensics expert at University of California at Berkeley, who helped develop PhotoDNA, said safety and privacy have for years taken a “back seat to engagement and profits.”

The fact that PhotoDNA, now more than a decade old, remains the industry standard “tells you something about the investment in these technologies,” he said. “The companies are so lethargic in terms of enforcement and thinking about these risks … at the same time, they’re marketing their products to younger and younger kids.”

Farid, who has worked as a paid adviser to Snap on online safety, said that he believes the company could do more but that the problem of child exploitation is industry-wide.

“We don’t treat the harms from technology the same way we treat the harms of romaine lettuce,” he said. “One person dies, and we pull every single head of romaine lettuce out of every store,” yet the children’s exploitation problem is decades old. “Why do we not have spectacular technologies to protect kids online?”

‘I thought this would be a secret’

The girl said the man messaged her randomly one day on Instagram in 2018, just before her 13th birthday. He fawned over her, she said, at a time when she was feeling self-conscious. Then he asked for her Snapchat account.

“Every girl has insecurities,” said the girl, who lives in California. “With me, he fed on those insecurities to boost me up, which built a connection between us. Then he used that connection to pull strings.” The Post does not identify victims of sexual abuse without their permission.

He started asking for photos of her in her underwear, then pressured her to send videos of herself nude, then more explicit videos to match the ones he sent of himself. When she refused, he berated her until she complied, the lawsuit states. He always demanded more.

She blocked him several times, but he messaged her through Instagram or via fake Snapchat accounts until she started talking to him again, the lawyers wrote. Hundreds of photos and videos were exchanged over a three-year span.

She felt ashamed, but she was afraid to tell her parents, the girl told The Post. She also worried what he might do if she stopped. She thought reporting him through Snapchat would do nothing, or that it could lead to her name getting out, the photos following her for the rest of her life.

“I thought this would be a secret,” she said. “That I would just keep this to myself forever.” (Snap officials said users can anonymously report concerning messages or behaviors, and that its “trust and safety” teams respond to most reports within two hours.)

Last spring, she told The Post, she saw some boys at school laughing at nude photos of young girls and realized it could have been her. She built up her confidence over the next week. Then she sat with her mother in her bedroom and told her what had happened.

Her mother told The Post that she had tried to follow the girl’s public social media accounts and saw no red flags. She had known her daughter used Snapchat, like all of her friends, but the app is designed to give no indication of who someone is talking to or what they’ve sent. In the app, when she looked at her daughter’s profile, all she could see was her cartoon avatar.

 

The lawyers cite Snapchat’s privacy policy to show that the app collects troves of data about its users, including their location and who they communicate with — enough, they argue, that Snap should be able to prevent more users from being “exposed to unsafe and unprotected situations.”

 

Stout, the Snap executive, told the Senate Commerce, Science and Transportation Committee’s consumer protection panel in October that the company was building tools to “give parents more oversight without sacrificing privacy,” including letting them see their children’s friends list and who they’re talking to. A company spokesman told The Post those features are slated for release this summer.

 

Thinking back to those years, the mother said she’s devastated. The Snapchat app, she believes, should have known everything, including that her daughter was a young girl. Why did it not flag that her account was sending and receiving so many explicit photos and videos? Why was no one alerted that an older man was constantly messaging her using overtly sexual phrases, telling her things like “lick it up”?

 

After the family called the police, the man was charged with sexual abuse of a child involving indecent exposure as well as the production, distribution and possession of child pornography.

At the time, the man had been a U.S. Marine Corps lance corporal stationed at a military base, according to court-martial records obtained by The Post.

 

As part of the Marine Corps’ criminal investigation, the man was found to have coerced other underage girls into sending sexually explicit videos that he then traded with other accounts on Chitter. The lawsuit cites a number of Apple App Store reviews from users saying the app was rife with “creeps” and “pedophiles” sharing sexual photos of children.

 

The man told investigators he used Snapchat because he knew the “chats will go away.” In October, he was dishonorably discharged and sentenced to seven years in prison, the court-martial records show.

 

The girl said she has suffered from guilt, anxiety and depression after years of quietly enduring the exploitation and has attempted suicide. The pain “is killing me faster than life is killing me,” she said in the suit.

 

Her mother said that the last year has been devastating, and that she worries about teens like her daughter — the funny girl with the messy room, who loves to dance, who wants to study psychology so she can understand how people think.

 

“The criminal gets punished, but the platform doesn’t. It doesn’t make sense,” the mother said. “They’re making billions of dollars on the backs of their victims, and the burden is all on us.”

 

For original post, please visit:

https://www.washingtonpost.com/technology/2022/05/05/snapchat-teens-nudes-lawsuit/ 


Illuminate Data Breach Impact in Colorado Grows to 7 Districts Plus 1 California District and 3 in Connecticut // THE Journal


By Kristal Kuykendall

"The impact of the Illuminate Education data breach that occurred in January continues growing as more K–12 school districts in Colorado and Connecticut and one in California have notified parents that their students, too, had their private information stolen.

Seven school districts in Colorado — with total current enrollment of about 132,000 students — have recently alerted parents that current and former students were impacted in the breach, which Illuminate has said was discovered after it began investigating suspicious access to its systems in early January.

The incident at Illuminate resulted in a week-long outage of all Illuminate’s K–12 school solutions, including IO Classroom (previously named Skedula), PupilPath, EduClimber, IO Education, SchoolCity, and others, according to its service status site. The company’s website states that its software products serve over 5,000 schools nationally with a total enrollment of about 17 million U.S. students.

 

The New York State Education Department last week told THE Journal that 565 schools in the state — including “at least” 1 million current and former students — were among those impacted by the Illuminate data breach, and data privacy officials there opened an investigation on April 1.

The list of all New York schools impacted by the data breach was sent to THE Journal in response to a Freedom of Information request; NYSED officials said the list came from Illuminate. Each impacted district was working to confirm how many current and former students were among those whose data were compromised, and each is required by law to report those totals to NYSED, so the total number of students affected was expected to grow, the department said last week.

Since late April, the following school districts have confirmed in letters to parents or on their websites that current and/or former students were impacted by the data breach:

Colorado Districts Known to be Impacted by Data Breach:

California Districts Known to be Impacted by Data Breach:

Connecticut Districts Known to be Impacted by Data Breach:

  • Coventry Public Schools in Connecticut, current enrollment 1,650; did not specify the total impacted.
  • Pomperaug Regional School District 15, current enrollment about 3,600; said the breach affected students enrolled during the 2017–2019 school years; the district ceased using Illuminate Education in 2019.
  • Cheshire Public Schools, current enrollment about 1,500; said the breach affected students enrolled during the 2017–2019 school years.

New York's Investigation of Illuminate Breach

As of last week, 17 local education agencies in New York — 15 districts and two charter school groups — had filed their data breach reports with NYSED showing that 179,377 current and former students had their private data stolen during the incident, according to the document sent to THE Journal. That total does not include the number impacted at NYC Schools, where officials said in late March that about 820,000 current and former students had been impacted by the Illuminate breach.

All but one of the agencies whose data breach reports have been filed with the state said that more students were impacted than were currently enrolled, meaning both current and former students were impacted by the breach. For example, Success Academy Charter Schools, which has nearly three dozen schools in its network, reported 55,595 students affected by the breach, while current enrollment is just under 20,000."

 

https://thejournal.com/articles/2022/05/12/illuminate-data-breach-impact-in-co-grows-to-7-districts-plus-1-ca-district-and-3-in-ct.aspx?m=1 

 

Scooped by Roxana Marachi, PhD

Roblox Metaverse Playing Games with Consumers: Truth In Advertising files complaint with the FTC concerning deceptive advertising on Roblox // TruthInAdvertising.org


"Roblox, a multibillion-dollar public company based in California, says its mission is to “bring the world together through play.” The Roblox platform is an immersive virtual space consisting of 3D worlds in which users can play games, attend concerts and throw birthday parties, among a host of other activities. With more than 54 million daily users and over 40 million games and experiences, it’s not surprising that in 2021 alone, users from 180 different countries spent more than 40 billion hours in this closed platform metaverse.

But according to an investigation by TINA.org, advertising is being surreptitiously pushed in front of millions of users on Roblox by a multitude of companies and avatar influencers. Such digital deception is possible because Roblox has failed to establish any meaningful guardrails to ensure compliance with truth in advertising laws. As a result, the brands Roblox has invited into its metaverse, including but not limited to DC Entertainment, Hasbro, Hyundai, Mattel, Netflix, NFL Enterprise, Nike and Paramount Game Studios, along with undisclosed avatar brand influencers and AI-controlled brand bots, are running roughshod on the platform, manipulating and exploiting consumers, including its most vulnerable players – more than 25 million children.

Roblox community standards dictate that “[a]ds may not contain content intended for users under the age of 13,” presumably because this vulnerable age group, which makes up nearly half of Roblox’s daily users, can’t identify advertisements disguised as games (also known as advergames). In fact, even adults can have trouble accurately identifying advergames, which are found on Roblox in ever-increasing numbers. And as brands exploit unsuspecting consumers, tricking them into taking part in immersive advertising experiences, the companies, including Roblox, are taking users’ time, attention and money while extracting their personal data. And to make matters worse, Roblox lures consumers, including minors, to its platform with atypical earnings representations including claims that users can make millions of dollars as game developers, despite the fact that the vast majority of Roblox game developers will never make any money.

On April 19, TINA.org filed a complaint with the FTC concerning Roblox and a multitude of other companies and sponsored avatar influencers on the platform, urging the agency to open an investigation into the deceptive advertising on and by Roblox and take appropriate enforcement action. At a minimum, Roblox needs to stop breaching its own community standards and uphold its promise to parents that it will keep children as safe as possible online by enforcing its own rule prohibiting ads from containing content intended for users under the age of 13.

In a world…

Advergames or branded worlds are everywhere on Roblox. Or maybe not. It is difficult to say exactly how many there are given the lack of clear and conspicuous disclosures on the platform. Take, for example, the following search results on Roblox for experiences based on the Netflix series “Stranger Things.” It is not at all clear which, if any, of these experiences are sponsored.

 

Clicking on an individual thumbnail image provides little clarity.

The only indication that the second experience in the above search results – Stranger Things: Starcourt Mall – is an advergame is the small print under the name of the game that says “By Netflix,” which is not likely to be seen by most Roblox users (and even if they do notice this fine-print disclosure, they may not understand what it means).

And while the other experiences in the search results have the brand – Stranger Things – in their name, and brand imagery, none of those games are sponsored. So just because a brand is in the name of a game or experience doesn’t necessarily mean it is an advergame. Indeed, a search for “sports worlds” brings up more than a dozen Vans Worlds, only one of which is sponsored.

 

Additional examples of undisclosed advergames in the Roblox metaverse include Nikeland, which has been visited more than 13 million times since Nike launched the branded world last fall. In Nike’s advertisement, users can “[e]xplore the world of sport, swim in Lake Nike, race your friends on the track [and] discover hidden secrets.” Then there’s Vans World (the sponsored one), which has been visited more than 63 million times since its launch last April, where users “[e]xplore different skate sites” and “[k]it out [their] Avatar in Vans Apparel.” Like with many worlds on Roblox, the apparel will cost you, as it must be purchased using Robux, Roblox’s virtual currency that powers its digital economy and has been crucial to the company’s success. (More on that later.)

Venturing outside their own branded worlds

In addition to creating their own undisclosed advergames, brands have also deceptively infiltrated organic games. For example, in May 2020, to coincide with the release of the studio’s “Scoob!” movie that month, Warner Brothers’ Scooby-Doo brand made a limited promotional appearance in the organic pet-raising game Adopt Me!, which is the most popular game of all time on Roblox with more than 28 billion visits. During the promotional event in the family-friendly game, players could adopt Scooby as a pet and take a spin in the Mystery Machine. However, there was never any discernible disclosure to its audience that this was a sponsored event, nor did it comply with Roblox criteria that ads not be directed at children under the age of 13.

Avatar influencers

Perhaps even more insidious than the use of advergames and sponsored content within organic games is the use of undisclosed avatar influencer marketing. These avatars look and act like any other avatar you might run into on Roblox, but they are controlled by paid brand influencers with a hidden agenda: to promote brands throughout the Roblox metaverse. This means that there are potentially millions of players seeing, communicating with, and interacting with brand endorsers in the Roblox metaverse without ever knowing it.

For example, one (of at least a dozen) undisclosed Nike avatar influencers was complimented on his Nike gear by another player in Nikeland, who wrote in the chat bar “how doyou get the gear” and “that nike hat is drippy,” while another player spotted the popular avatar in Nikeland and wrote, “TW dessi??? omgomg.”

 

In addition to these avatar influencers (which, besides Nike, are used by numerous other brands including Vans, Hyundai and Forever 21) are Roblox’s army of more than 670 influencers, known as Roblox Video Stars. Roblox Video Stars are Roblox users who have large followings on social media and who Roblox has invited to participate in its influencer program in which the company rewards the Stars with a number of benefits, including free Roblox Premium memberships, early access to certain Roblox events and features, and the ability to earn commissions on Robux sales to users. And while Roblox requires the Stars to disclose their material connection to the platform in their social media posts, it does not require Stars to disclose their material connection to Roblox while on the platform itself even though the law requires such disclosure when brand influencers are interacting with users within the platform’s ecosystem.

 

Brands are also using undisclosed AI-controlled avatars in advergames to promote consumer engagement and spending, among other things. In the Hot Wheels Open World (an undisclosed Mattel advergame), for example, AI bots urge players to upgrade their cars using virtual currency. And in the NASCAR showroom in Jailbreak, a popular organic game with a cops-and-robbers theme that hosted an undisclosed sponsored event by NASCAR in February, AI bots let players know that NASCAR was giving away a car for free. In Nikeland, there were even AI bots modeled after real-life NBA stars Giannis Antetokounmpo and LeBron James, each of which was giving away Nike gear to players. While Antetokounmpo tweeted to more than 2 million followers and posted to more than 12 million Instagram fans late last year that they should “[c]ome find me” in Nikeland because he was giving away “free gifts,” it appears that neither Antetokounmpo nor James ever controlled their avatars in Nikeland – rather, the look-a-like avatars interacting with other users were simply AI-controlled agents of Nike. In none of these examples did the brands inform users that they were seeing and interacting with AI-controlled brand avatars.

 

 

In its complaint letter to the FTC, TINA.org reiterated its position that consumers have a right to know when they are interacting with bots that are used by brands in their advertisements. In fact, wherever endorsements take place, advertisers must fulfill their duty to ensure that the form, content and disclosure used by any influencer, at a minimum, complies with the law. Even in the metaverse, companies are legally responsible for ensuring that consumers, whatever their age may be, know that what they are viewing or interacting with is an endorsement. And despite the transitory nature of avatar influencers participating as walking and talking endorsements within the Roblox metaverse, no brand (including Roblox) should ignore its legal obligation to disclose these endorsements. Indeed, earlier this year, Roblox and many other companies, including Nike, Hyundai, VF Corp. (which owns Vans) and Mattel (which owns Hot Wheels), were formally reminded by the FTC that material connections between endorsers and brands must be clearly and conspicuously disclosed in a manner that will be easily understood by the intended audience. Now, after receiving this notice, violations can carry with them penalties of up to $46,517 per violation.

Vulnerable consumers

Of the platform’s more than 50 million daily users, more than 25 million are children aged 13 and under, an age group that generally cannot identify advertisements disguised as games, of which there is an ever-increasing number on Roblox as brands have been eager to enter the Roblox ecosystem. Roblox community standards dictate that “[a]ds may not contain content intended for users under the age of 13.” But the reality is many of these advergames are aimed at precisely this age group.

In fact, even adults can have trouble accurately identifying undisclosed advergames. But rather than requiring brands to follow the law and clearly and conspicuously disclose advergames, avatar influencers and other promotional content as marketing – so that consumers aren’t tricked into taking part in immersive advertising experiences without their knowledge – Roblox has failed to establish any meaningful guardrails to ensure compliance with truth in advertising laws. Instead, it has generally abdicated this responsibility to its developers and brands.

Strike it rich on Roblox? Think again

Excerpt from Roblox webpage (August 2, 2021)

As of December 2020, there were more than 8 million active developers on Roblox. One of the ways that Roblox persuades these developers (which include minors) to create games for free is by deceptively representing that games can earn them real world cash despite the fact that the typical Roblox developer earns no money.

“More and more, our developers and creators are starting to make a living on Roblox,” Roblox Founder and CEO Dave Baszucki said on a Roblox Investor Day call in February 2021, adding:


"What used to be a hobby has become a job for an individual person … Developers have enjoyed meaningful earnings expansion over time. … People [are] starting to fund their college education. … [This is] an amazing economic opportunity … We can imagine developers making $100 million a year and more."

But the reality is that it is incredibly difficult for a developer to get their game noticed on Roblox and begin earning cash (which is why some Roblox developers – and brands – have resorted to buying likes to enhance their game’s visibility on the platform, a deceptive tactic that Roblox says it does not permit but apparently does not adequately monitor or prevent). The numbers don’t lie: Only 0.1 percent of Roblox developers and creators gross $10,000 or more and only 0.02 percent gross $100,000 or more annually, according to a Roblox SEC filing.

Robux: Roblox’s virtual currency

Whether users come to Roblox as developers or players, there is one thing they both need in order to maximize their experience: Robux. Robux, which can be purchased with U.S. dollars, are used by players to buy accessories and clothing for their avatar, special abilities within games, access to pay-to-play games and Roblox Premium subscriptions, among other things. Robux can be purchased in various increments, from 400 for $4.99 to 10,000 for $99.99.

And, as Roblox likes to advertise, Robux can also be earned by creators and developers in a variety of ways, including creating and selling accessories and clothes for avatars; selling special abilities within games; driving engagement, meaning that developers are rewarded by Roblox for the amount of time Premium subscribers spend in their games; and selling content and tools, such as plugins, to other developers. Not only are Robux hard to earn, but for every dollar a user spends on something developers have created, developers get, on average, 28 cents. And to make matters worse, the exchange rate for earned Robux is 0.0035 USD per Robux, meaning purchased Robux cost nearly three times what earned Robux are worth.
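The exchange-rate gap described above can be checked with quick arithmetic. A minimal sketch using only the figures quoted in the article (the two purchase tiers and the $0.0035 payout rate); the comparison itself is our illustration, not Roblox's published math:

```python
# Comparing what buyers pay per Robux with what developers are paid
# per earned Robux, using the figures quoted in the article.

PURCHASE_TIERS = {400: 4.99, 10_000: 99.99}  # Robux amount -> USD price
EARNED_RATE_USD = 0.0035                     # USD paid out per earned Robux

for robux, usd in PURCHASE_TIERS.items():
    cost_per_robux = usd / robux              # what a buyer pays per Robux
    ratio = cost_per_robux / EARNED_RATE_USD  # purchase cost vs. payout value
    print(f"{robux:>6} Robux: ${cost_per_robux:.4f} each, "
          f"~{ratio:.1f}x the earned payout rate")
```

At the $99.99 tier this works out to roughly $0.01 per purchased Robux against a $0.0035 payout, a gap of close to 3x, which is the "nearly 300 percent" figure the article cites.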

In addition, unlike other metaverse platforms, Roblox virtual items and its currency are not created or secured using blockchain technology, which means Roblox objects are not NFTs (non-fungible tokens) and Robux is not a cryptocurrency. As a result, when a Roblox user loses their account for whatever reason, they also lose every asset that was in the account, an occurrence that appears to happen with some frequency according to consumer complaints filed against Roblox with the FTC. (While the FTC said it has received nearly 1,300 consumer complaints against Roblox, the agency only provided TINA.org with a sampling of 200 complaints in response to its FOIA request, citing FTC policy. TINA.org has appealed the decision in order to gain access to all 1,291 of the complaints.)

Action cannot wait

Roblox, one of the largest gaming companies in the world, and the brands it has invited into its metaverse are actively exploiting and manipulating users, including millions of children who cannot tell the difference between advertising and organic content, for their own financial gain. The result is that kids and other consumers are spending an enormous amount of money, attention and time on the platform. The FTC must act now, before an entire generation of minors is harmed by the numerous forms of deceptive advertising occurring on the Roblox platform."

 

For original post, please visit: 

https://truthinadvertising.org/articles/roblox-metaverse-playing-games-with-consumers/ 
