
Archive for the ‘law’ category: Page 34

Jun 14, 2022

South Korean factories are rushing to replace humans with robots

Posted in categories: law, robotics/AI

In January, a law came into effect in South Korea called the Serious Disasters Punishment Act. The new regulation states that if workers die or sustain serious injuries during work, courts could fine the CEO or high-ranking managers of the firms or even send them to jail.

An increase in robot investments

This event has spurred an increase in investment in robots in the nation, according to a report by Rest of the World published on June 6.

Jun 9, 2022

“Firehose” of raw data: Twitter agrees to provide complete ‘Fake Accounts’ data to Elon Musk

Posted in categories: cybercrime/malcode, Elon Musk, law, robotics/AI

According to multiple news reports, Twitter plans to give Elon Musk access to its “firehose” of raw data on hundreds of millions of daily tweets in an effort to speed up the Tesla billionaire’s $44 billion acquisition of the social media platform. The data-sharing agreement was not confirmed by the lawyers involved in the deal. Musk was silent on Twitter, despite having previously expressed his displeasure with various aspects of the deal.

Twitter declined to comment on the reports, pointing instead to a statement released on Monday in which the company said it is continuing to “cooperate” and share information with Musk. Musk, who in April entered into a legally binding agreement to purchase Twitter, claims that the transaction cannot go forward until the firm discloses more information on the frequency of bogus accounts on its network. He claims, without providing evidence, that Twitter has grossly underestimated the number of “spam bots” on its platform, which are automated accounts that typically promote scams and misinformation.

On Monday, the Attorney General of the State of Texas, Ken Paxton, said that his office will be investigating “possible false reporting” of bot activity on Twitter as part of an inquiry against Twitter for allegedly failing to disclose the scale of its spam bot and fake account activity. According to a source familiar with the situation, Twitter’s plan to give Musk full access to the firehose was first reported by the Washington Post. According to other reports, the billionaire may only have limited access.

Jun 7, 2022

Musk will drop Twitter deal if data on bots not provided

Posted in categories: cybercrime/malcode, Elon Musk, law, robotics/AI

In a new letter, Elon Musk threatens to walk away from the $44 billion Twitter deal if management doesn’t provide more data on total bot counts.

According to a letter sent by Elon Musk’s legal team to Twitter, “Twitter refused to provide the information that Mr. Musk has repeatedly requested since May 9, 2022 to facilitate his evaluation of spam and fake accounts on the company’s platform,” and “Its effort to characterize it otherwise is merely an attempt to obfuscate and confuse the issue.”

The letter also noted that Musk does not believe the company’s lax testing methodologies are adequate, so he must conduct his own analysis, and that “The data he has requested is necessary to do so.” The letter also said, “Mr. Musk is entitled to seek, and Twitter is obligated to provide information and data.”

Jun 4, 2022

Genetic paparazzi are right around the corner, and courts aren’t ready to confront the legal quagmire of DNA theft

Posted in categories: biotech/medical, genetics, law

Both Macron and Madonna have expressed concerns about genetic privacy. As DNA collection and sequencing becomes increasingly commonplace, what may seem paranoid may instead be prescient.

Jun 3, 2022

New York just passed a bill cracking down on bitcoin mining — here’s everything that’s in it

Posted in categories: bitcoin, blockchains, cryptocurrencies, law, security

Following an early morning vote in Albany on Friday, lawmakers in New York passed a bill to ban certain bitcoin mining operations that run on carbon-based power sources. The measure now heads to the desk of Governor Kathy Hochul, who could sign it into law or veto it.

If Hochul signs the bill, it would make New York the first state in the country to ban blockchain technology infrastructure, according to Perianne Boring, founder and president of the Chamber of Digital Commerce. Industry insiders also tell CNBC it could have a domino effect across the U.S., which is currently at the forefront of the global bitcoin mining industry, accounting for 38% of the world’s miners.

The New York bill, which previously passed the State Assembly in late April before heading to the State Senate, calls for a two-year moratorium on certain cryptocurrency mining operations which use proof-of-work authentication methods to validate blockchain transactions. Proof-of-work mining, which requires sophisticated gear and a whole lot of electricity, is used to create bitcoin. Ethereum is switching to a less energy-intensive process, but will still use this method for at least another few months.
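Since the bill targets proof-of-work specifically, it helps to see what that loop looks like. The toy Python sketch below is a minimal illustration only, not Bitcoin’s actual implementation: real mining hashes a binary block header with double SHA-256 against a 256-bit target, whereas here difficulty is simplified to a count of leading zero hex digits, and the names (mine, block_data) are made up for the example.

import hashlib

def mine(block_data: str, difficulty: int = 5) -> tuple[int, str]:
    # Toy proof-of-work: increment a nonce until the SHA-256 hash of
    # block_data + nonce starts with `difficulty` zero hex digits.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("example block header", difficulty=5)
print(f"nonce={nonce} hash={digest}")

Each additional leading zero multiplies the expected number of hash attempts by roughly 16, which is why competitive mining demands specialized hardware and the large amounts of electricity the moratorium is aimed at.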

Jun 1, 2022

Who’s liable for AI-generated lies?

Posted in categories: law, robotics/AI

Who will be liable for harmful speech generated by large language models? As advanced AIs such as OpenAI’s GPT-3 are being cheered for impressive breakthroughs in natural language processing and generation — and all sorts of (productive) applications for the tech are envisaged from slicker copywriting to more capable customer service chatbots — the risks of such powerful text-generating tools inadvertently automating abuse and spreading smears can’t be ignored. Nor can the risk of bad actors intentionally weaponizing the tech to spread chaos, scale harm and watch the world burn.

Indeed, OpenAI is concerned enough about the risks of its models going “totally off the rails,” as its documentation puts it at one point (in reference to a response example in which an abusive customer input is met with a very troll-esque AI reply), to offer a free content filter that “aims to detect generated text that could be sensitive or unsafe coming from the API” — and to recommend that users don’t return any generated text that the filter deems “unsafe.” (To be clear, its documentation defines “unsafe” to mean “the text contains profane language, prejudiced or hateful language, something that could be NSFW or text that portrays certain groups/people in a harmful manner.”).
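As a rough illustration of the gating pattern that recommendation implies, the Python sketch below generates a completion, scores it with a filter, and withholds anything the filter labels unsafe. The generator and filter here (generate, classify_safety) and the label scheme are hypothetical placeholders for this sketch, not OpenAI’s actual endpoints or signatures.

from typing import Callable

SAFE, SENSITIVE, UNSAFE = 0, 1, 2  # illustrative label scheme, assumed for this sketch

def guarded_completion(generate: Callable[[str], str],
                       classify_safety: Callable[[str], int],
                       prompt: str,
                       fallback: str = "[response withheld by content filter]") -> str:
    # Only surface the model's output if the filter does not flag it as unsafe.
    completion = generate(prompt)
    if classify_safety(completion) == UNSAFE:
        return fallback
    return completion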

But, given the novel nature of the technology, there are no clear legal requirements that content filters must be applied. So OpenAI is acting either out of concern to avoid its models causing generative harms to people, out of reputational concern, or both: if the technology becomes associated with instant toxicity, that could derail its development.

May 30, 2022

Elon Musk versus the Woke Cartel

Posted in categories: Elon Musk, governance, government, law, neuroscience, sustainability, transhumanism

Many criticisms have been leveled against Elon Musk—that he’s part of the elite, that Tesla has been the beneficiary of government handouts and exemptions, that his transhumanist Neuralink is a brain-data-mining operation. Yet his planned purchase of Twitter, his supposed free-speech absolutism, and his subsequent renunciation of the Democratic Party as “the party of hate” have put Musk squarely in the crosshairs of the woke cartel.

Vitriolic Twitter storms, a New York Times-Financial Times biographical exposé, a slew of hit pieces and scaremongering segments in the legacy media, and allegations of sexual harassment have dogged the automobile magnate ever since his Twitter bid. In response, Musk announced on Twitter that he’s assembling a legal crew to sue defamers and defend Tesla (and likely himself) against lawsuits.

But the best indication that the woke cartel has really gone berserk is its removal of Tesla from the S&P 500’s ESG (Environmental, Social, and Governance) Index. This last rebuff proves that “ESG is a scam.”

May 30, 2022

Artificial intelligence is breaking patent law

Posted in categories: geopolitics, law, robotics/AI, treaties

The patent system assumes that inventors are human. Inventions devised by machines require their own intellectual property law and an international treaty.

May 26, 2022

Ian Bremmer on NATO Expansion and the Opportunity for American Unity | Amanpour and Company

Posted in categories: governance, law

In the last part of this interview, Ian talks about the lack of a global legal / governance framework to deal with accelerating technologies.


At the World Economic Forum in Davos today, the president of Switzerland warned of a world in the throes of multiple crises. This is also the subject of a new book by political scientist Ian Bremmer. In “The Power of Crisis: How Three Threats – and Our Response – Will Change the World,” Bremmer looks at how we can better prepare for the global challenges ahead, as he explains to Walter Isaacson.


May 24, 2022

How Americans think about artificial intelligence

Posted in categories: employment, food, health, law, robotics/AI, transportation

Artificial intelligence (AI) is spreading through society into some of the most important sectors of people’s lives – from health care and legal services to agriculture and transportation. As Americans watch this proliferation, they are worried in some ways and excited in others.

In broad strokes, a larger share of Americans say they are “more concerned than excited” by the increased use of AI in daily life than say the opposite. Nearly half of U.S. adults (45%) say they are equally concerned and excited. Asked to explain in their own words what concerns them most about AI, some of those who are more concerned than excited cite their worries about potential loss of jobs, privacy considerations and the prospect that AI’s ascent might surpass human skills – and others say it will lead to a loss of human connection, be misused or be relied on too much.

But others are “more excited than concerned,” and they mention such things as the societal improvements they hope will emerge, the time savings and efficiencies AI can bring to daily life and the ways in which AI systems might be helpful and safer at work. And people have mixed views on whether three specific AI applications are good or bad for society at large.
