Archive for the ‘information science’ category: Page 11

Aug 25, 2024

The testing of AI in medicine is a mess. Here’s how it should be done

Posted by in categories: biotech/medical, information science, robotics/AI

Hundreds of medical algorithms have been approved on the basis of limited clinical data. Scientists are debating who should test these tools and how best to do it.

Aug 24, 2024

The Ethics, Challenges, and Future of Whole Brain Emulation & AGI | Deep Interview with Randal Koene

Posted by in categories: blockchains, ethics, information science, neuroscience, robotics/AI, singularity

Join Randal Koene, a computational neuroscientist, as he dives into the intricate world of whole brain emulation and mind uploading, while touching on the ethical pillars of AI. In this episode, Koene discusses the importance of equal access to AI, data ownership, and the ethical impact of AI development. He explains the potential future of AGI, how current social and political systems might influence it, and touches on the scientific and philosophical aspects of creating a substrate-independent mind. Koene also elaborates on the differences between human cognition and artificial neural networks, the challenge of translating brain structure to function, and efforts to accelerate neuroscience research through structured challenges.

00:00 Introduction to Randal Koene and Whole Brain Emulation.
00:39 Ethical Considerations in AI Development.
02:20 Challenges of Equal Access and Data Ownership.
03:40 Impact of AGI on Society and Development.
05:58 Understanding Mind Uploading.
06:39 Randal’s Journey into Computational Neuroscience.
08:14 Scientific and Philosophical Aspects of Substrate Independent Minds.
13:07 Brain Function and Memory Processes.
25:34 Whole Brain Emulation: Current Techniques and Challenges.
32:12 The Future of Neuroscience and AI Collaboration.

Continue reading “The Ethics, Challenges, and Future of Whole Brain Emulation & AGI | Deep Interview with Randal Koene” »

Aug 23, 2024

Researchers propose a smaller, more noise-tolerant quantum factoring circuit for cryptography

Posted by in categories: computing, encryption, information science, quantum physics

The most recent email you sent was likely encrypted using a tried-and-true method that relies on the idea that even the fastest computer would be unable to efficiently break a gigantic number into factors.

Quantum computers, on the other hand, promise to rapidly crack complex cryptographic systems that a classical computer might never be able to unravel. This promise is based on a quantum factoring algorithm proposed in 1994 by Peter Shor, who is now a professor at MIT.

But while researchers have taken great strides in the last 30 years, scientists have yet to build a quantum computer powerful enough to run Shor’s algorithm.
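To see why the classical side of the problem is hard, consider naive trial division (an illustrative toy, not a real cryptographic attack): the work grows with the square root of the modulus, i.e. exponentially in its digit count, which is why factoring the toy semiprime below is instant while factoring a 600-digit RSA modulus is infeasible classically.

```python
# Toy sketch: factoring by trial division. Cost scales with sqrt(n),
# i.e. exponentially in the number of digits of n -- trivial for the
# small semiprime below, hopeless for real RSA-sized moduli.

def factor_semiprime(n):
    """Return (p, q) with p * q == n, found by naive trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself is prime

p, q = factor_semiprime(10403)  # 10403 = 101 * 103
```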

Aug 23, 2024

The circle of life, publish or perish edition: Two journals retract more than 40 papers

Posted by in categories: information science, robotics/AI

The team has released the width-pruned version of the model on Hugging Face under the Nvidia Open Model License, which allows for commercial use. This makes it accessible to a wider range of users and developers who can benefit from its efficiency and performance.

“Pruning and classical knowledge distillation is a highly cost-effective method to progressively obtain LLMs [large language models] of smaller size, achieving superior accuracy compared to training from scratch across all domains,” the researchers wrote. “It serves as a more effective and data-efficient approach compared to either synthetic-data-style fine-tuning or pretraining from scratch.”
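The width-pruning idea can be sketched in a few lines (a toy illustration, not Nvidia’s actual pipeline; the magnitude-based scoring rule and the tiny layer below are invented for the example — production methods typically use activation- or gradient-based importance scores, followed by distillation to recover accuracy):

```python
# Toy width pruning: score each hidden neuron by the L2 norm of its
# weight vector and keep only the top-k, shrinking the layer's width.
# Real pipelines then apply knowledge distillation from the full model.

def width_prune(weights, k):
    """weights: one weight vector per neuron. Keep the k neurons with
    the largest L2 norm, preserving their original order."""
    scores = [sum(w * w for w in neuron) ** 0.5 for neuron in weights]
    top = sorted(range(len(weights)), key=lambda i: -scores[i])[:k]
    return [weights[i] for i in sorted(top)]

layer = [[0.1, 0.0], [2.0, 1.0], [0.0, 0.2], [1.5, -1.0]]
pruned = width_prune(layer, 2)  # keeps the two highest-norm neurons
```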

This work is a reminder of the value and importance of the open-source community to the progress of AI. Pruning and distillation are part of a wider body of research that is enabling companies to optimize and customize LLMs at a fraction of the normal cost. Other notable works in the field include Sakana AI’s evolutionary model-merging algorithm, which makes it possible to assemble parts of different models to combine their strengths without the need for expensive training resources.

Aug 23, 2024

Techno-futurists are selling an interplanetary paradise for the posthuman generation—they just forgot about the rest of us

Posted by in categories: computing, information science

Inside the cult of TESCREALism and the dangerous fantasies of Silicon Valley’s self-appointed demigods, for Document’s Spring/Summer 2024 issue.

As legend has it, Steve Jobs once asked Larry Kenyon, an engineer tasked with developing the Mac computer, to reduce its boot time by 10 seconds. Kenyon said that was impossible. “What if it would save a person’s life?” Jobs asked. Then, he went to a whiteboard and laid out an equation: If 5 million users spent an additional 10 seconds waiting for the computer to start, the total hours wasted would be equivalent to 100 human lifetimes every year. Kenyon shaved 28 seconds off the boot time in a matter of weeks.
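The whiteboard arithmetic is easy to check; whether it works out to “100 human lifetimes” depends entirely on the assumed boots per day and lifespan, which the legend leaves unstated (the figures below are explicit assumptions, not numbers from the story):

```python
# Back-of-the-envelope version of the anecdote's math.
# All inputs are assumptions for illustration.
users = 5_000_000
saved_seconds = 10                       # saved per boot
boots_per_day = 1
seconds_saved_per_year = users * saved_seconds * boots_per_day * 365
lifetime_seconds = 70 * 365 * 24 * 3600  # assume a ~70-year lifetime
lifetimes_per_year = seconds_saved_per_year / lifetime_seconds
# Under these assumptions the total is roughly 8 lifetimes per year;
# reaching "100 lifetimes" requires more boots per day or a shorter
# assumed lifespan.
```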

Often cited as an example of the late CEO’s “reality distortion field,” this anecdote illustrates the combination of charisma, hyperbole, and marketing with which Jobs convinced his disciples to believe almost anything—elevating himself to divine status and creating “a cult of personality for capitalists,” as Mark Cohen put it in an article about his death for the Australian Broadcasting Corporation. In helping to push the myth of the genius tech founder into the cultural mainstream, Jobs laid the groundwork for future generations of Silicon Valley investors and entrepreneurs who have, amid the global decline of organized religion, become our secular messiahs. They preach from the mounts of Google and Meta, selling the public on digital technology’s saving grace, its righteous ability to reshape the world.

Aug 22, 2024

Google/vizier: Python-based research interface for blackbox and hyperparameter optimization, based on the internal Google Vizier Service

Posted by in category: information science

The Gaussian process bandit algorithm.

How does Google optimize its research and systems?


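Vizier’s core abstraction is a suggest/evaluate/report loop over a blackbox objective. The sketch below mimics that loop with plain random search standing in for the Gaussian process bandit; it is not Vizier’s actual API, and the function names are invented for illustration.

```python
import random

# Minimal suggest/evaluate/report loop in the spirit of blackbox
# optimizers such as Vizier. Random search stands in for the Gaussian
# process bandit; a real GP bandit would use past (x, y) pairs to
# suggest promising points instead of sampling uniformly.

def optimize(objective, bounds, trials=200, seed=0):
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(trials):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]  # "suggest"
        y = objective(x)                                # evaluate
        if y < best_y:                                  # "report" best
            best_x, best_y = x, y
    return best_x, best_y

# Minimize a simple quadratic with optimum at (1, -2) over [-5, 5]^2.
x, y = optimize(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                [(-5, 5), (-5, 5)])
```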

Aug 22, 2024

Did AI Just Pass the Turing Test?

Posted by in categories: humor, information science, robotics/AI

A recent study by UC San Diego researchers brings fresh insight into the ever-evolving capabilities of AI. The authors looked at how convincingly several prominent AI models (GPT-4, GPT-3.5, and the classic ELIZA) could mimic human conversation, an application of the so-called Turing test for identifying when a computer program has reached human-level intelligence.

The results were telling: In a five-minute text-based conversation, GPT-4 was mistakenly identified as human 54 percent of the time, contrasted with ELIZA’s 22 percent. These findings not only highlight the strides AI has made but also underscore the nuanced challenges of distinguishing human intelligence from algorithmic mimicry.

Continue reading “Did AI Just Pass the Turing Test?” »

Aug 22, 2024

Fast and robust analog in-memory deep neural network training

Posted by in categories: information science, robotics/AI

Recent hardware implementations of analog in-memory computing have focused mainly on accelerating inference. To improve the training process, the authors of this work propose algorithms for supervised training of deep neural networks on analog in-memory AI accelerator hardware.
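The core difficulty can be illustrated with a toy model (this is not the paper’s algorithm, just a sketch of the setting): analog devices apply each weight update imperfectly, so every SGD step lands with device noise, and training algorithms must remain robust to it.

```python
import random

# Toy illustration: fit y = 3 * x by SGD where each weight update is
# written with multiplicative device noise, mimicking imperfect analog
# in-memory updates. Plain SGD still converges on average here; real
# analog-aware training algorithms handle far harsher nonidealities.

rng = random.Random(42)
true_w, w, lr = 3.0, 0.0, 0.05
for _ in range(500):
    x = rng.uniform(-1, 1)
    grad = 2 * (w * x - true_w * x) * x   # d/dw of (w*x - y)^2
    update = -lr * grad
    w += update * rng.gauss(1.0, 0.1)     # imperfect analog write
```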

Aug 21, 2024

Can massive particles be seen as soliton solutions?

Posted by in categories: information science, particle physics

I wonder whether the common relativistic wave equations admit soliton solutions, which might be considered as particle localisations.
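There is at least one classic concrete case: the sine-Gordon equation is a relativistic wave equation with an exact kink soliton, a localized, particle-like solution (sketched below in standard (1+1)-dimensional form with units c = 1):

```latex
% Sine-Gordon equation in (1+1) dimensions:
%   \partial_t^2 \phi - \partial_x^2 \phi + \sin\phi = 0
% Its kink soliton is localized and Lorentz-contracts like a particle:
\phi(x,t) = 4\arctan\!\left[\exp\!\left(\frac{x - vt}{\sqrt{1 - v^2}}\right)\right]
```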

Aug 16, 2024

9.523: Aspects of a Computational Theory of Intelligence

Posted by in categories: information science, neuroscience, robotics/AI

The problem of intelligence — its nature, how it is produced by the brain and how it could be replicated in machines — is a deep and fundamental problem that cuts across multiple scientific disciplines. Philosophers have studied intelligence for centuries, but it is only in the last several decades that developments in science and engineering have made questions such as these approachable: How does the mind process sensory information to produce intelligent behavior, and how can we design intelligent computer algorithms that behave similarly? What is the structure and form of human knowledge — how is it stored, represented, and organized? How do human minds arise through processes of evolution, development, and learning? How are the domains of language, perception, social cognition, planning, and motor control combined and integrated? Are there common principles of learning, prediction, decision, or planning that span across these domains?

This course explores these questions with an approach that integrates cognitive science, which studies the mind; neuroscience, which studies the brain; and computer science and artificial intelligence, which study the computations needed to develop intelligent machines. Faculty and postdoctoral associates affiliated with the Center for Brains, Minds and Machines discuss current research on these questions.

Page 11 of 322