SpaceX has tested the emergency chutes at the Kennedy Space Center in Florida that could save astronauts’ lives during a launch.
A new video shared by the company on X, formerly Twitter, shows a person in a black-and-white SpaceX spacesuit zipping down from the crew level of the launch tower inside a tube of red and white fabric, an ordeal that looks equally exhilarating and terrifying, especially given the threat of an exploding rocket right behind you.
“Even though it’s meant to be used for emergencies, it looks like a lot of fun!” SpaceX CEO Elon Musk commented in a tweet.
In this episode, recorded during the 2024 Abundance360 Summit, Peter Diamandis and Elon Musk discuss superintelligence, the future of AI, Neuralink, and more.
Elon Musk is a businessman, founder, investor, and CEO. He co-founded PayPal, Neuralink, and OpenAI; founded SpaceX; and is the CEO of Tesla and the chairman of X.
The static fire test comes less than two weeks after the last Starship mission, which saw the rocket reach orbital velocity for the first time before breaking up during reentry into Earth’s atmosphere.
Gwynne Shotwell, SpaceX’s chief operating officer, said last week that the next launch attempt could take place in the “beginning part of May”, though no payload will be on board.
The fourth major flight test of the fully stacked Starship rocket system will instead aim to resolve the issues that arose during the last mission.
VEENDAM, Netherlands (AP) — A 420-meter (quarter-mile) white steel tube running alongside a railway line in the windswept northern Netherlands could usher in a new era in the transportation of people and freight.
The tube is the heart of the new European Hyperloop Center that opens Tuesday and will be a proving ground in coming years for developers of the evolving technology.
Hyperloop, once trumpeted by Elon Musk, involves capsules floating on magnetic fields and zipping at speeds of around 700 kph (435 mph) through low-pressure tubes. Its advocates tout it as far more efficient than short-haul flights, high-speed rail and freight trucks.
The first human recipient of a Neuralink brain implant has shared new details on his recovery and experience of living with the experimental assistive tech, which has allowed him a greater level of freedom and autonomy, including the ability to pull an all-nighter playing Sid Meier’s Civilization 6.
Neuralink co-founder Elon Musk took to X/Twitter in January to reveal that the company had implanted its first brain-computer interface in the head of a human patient, who was “recovering well” following the surgery. The billionaire also hinted at the time that the implant was functioning well and had detected a “promising neuron spike”. In a subsequent February update, Musk commented that the unnamed patient had seemingly made a full recovery, and was even able to use the implant to manipulate a computer cursor with thought alone.
Finally, on March 20, Neuralink posted its own update to X in the form of a nine-minute livestream in which 29-year-old implant recipient Noland Arbaugh used the technology to play a digital version of chess, while discussing how living with the experimental aid had changed his life.
The term “artificial general intelligence” (AGI) has become ubiquitous in current discourse around AI. OpenAI states that its mission is “to ensure that artificial general intelligence benefits all of humanity.” DeepMind’s company vision statement notes that “artificial general intelligence…has the potential to drive one of the greatest transformations in history.” AGI is mentioned prominently in the UK government’s National AI Strategy and in US government AI documents. Microsoft researchers recently claimed evidence of “sparks of AGI” in the large language model GPT-4, and current and former Google executives proclaimed that “AGI is already here.” The question of whether GPT-4 is an “AGI algorithm” is at the center of a lawsuit filed by Elon Musk against OpenAI.
Given the pervasiveness of AGI talk in business, government, and the media, one could not be blamed for assuming that the meaning of the term is established and agreed upon. However, the opposite is true: What AGI means, or whether it means anything coherent at all, is hotly debated in the AI community. And the meaning and likely consequences of AGI have become more than just an academic dispute over an arcane term. The world’s biggest tech companies and entire governments are making important decisions on the basis of what they think AGI will entail. But a deep dive into speculations about AGI reveals that many AI practitioners have starkly different views on the nature of intelligence than do those who study human and animal cognition—differences that matter for understanding the present and predicting the likely future of machine intelligence.