
A bullish case for Tesla's future predicts a $10 trillion valuation driven by the launch of the Optimus robot and full self-driving technology, alongside ambitious plans for a robotaxi service and significant production growth.

Key Insights.

Tesla's vertically integrated supply chains and in-house development of Optimus components make it difficult for competitors to replicate its success in robotics and AI.

Economic Impact.

Tesla is projected to become the world's most valuable company, worth more than the next five largest companies combined, primarily due to autonomous vehicles and robots. Autonomous Tesla vehicles are expected to increase car utility by 5x, operating 55 hours per week instead of the typical 10, enabling 24/7 ride-hailing and delivery services.

Safety and Technology.

Tesla's Full Self-Driving (FSD) technology is reported to be 8 times safer per mile than human driving, with continuous improvements and updates. Tesla's real-world AI for self-driving is claimed to be so far ahead that Musk jokes competitors would need a telescope to see it.

Energy and Infrastructure.

Tesla's energy storage solutions are becoming increasingly important, potentially doubling the output of the electric grid.

The world of AI is evolving at a breakneck pace with new models constantly being created. With so much rapid innovation, it is essential to have the flexibility to quickly adapt applications to the latest models. This is where Azure Container Apps serverless GPUs come in.

Azure Container Apps is a managed serverless container platform that enables you to deploy and run containerized applications while reducing infrastructure management and saving costs.

With serverless GPU support, you get the flexibility to bring any containerized workload, including new language models, and deploy it to a platform that automatically scales with customer demand. In addition, you get optimized cold starts, per-second billing, and reduced operational overhead, letting you focus on the core components of your applications when using GPUs. All the while, you can run your AI applications alongside your non-AI apps on the same platform, within the same environment, sharing networking, observability, and security capabilities.

OpenAI, the company behind ChatGPT, says it has proof that the Chinese start-up DeepSeek used its technology to create a competing artificial intelligence model — fueling concerns about intellectual property theft in the fast-growing industry.

OpenAI believes DeepSeek, which was founded by math whiz Liang Wenfeng, used a process called “distillation,” which helps make smaller AI models perform better by learning from larger ones.

While this is common in AI development, OpenAI says DeepSeek may have broken its rules by using the technique to create its own AI system.
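The article does not describe distillation in detail. As a rough illustration of the general technique (not OpenAI's or DeepSeek's actual pipeline), here is a minimal Python sketch in which a small "student" model is scored on how closely its output distribution matches a temperature-softened "teacher" distribution; all names and numbers are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Minimizing this pushes the small student to reproduce the large
    teacher's full output distribution, not just its top answer.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher has (near-)zero loss.
teacher = [2.0, 1.0, 0.1]
aligned = distillation_loss(teacher, [2.0, 1.0, 0.1])
mismatched = distillation_loss(teacher, [0.1, 1.0, 2.0])
```

A real distillation run would minimize this loss over many training examples; the point here is only that the loss rewards matching the teacher's whole distribution.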

While DeepSeek makes AI cheaper, seemingly without cutting corners on quality, a group of researchers is working on a different problem: building tests hard enough that AI models can't pass them. Their effort is called ‘Humanity's Last Exam.’

If you’re looking for a new reason to be nervous about artificial intelligence, try this: Some of the smartest humans in the world are struggling to create tests that AI systems can’t pass.

For years, AI systems were measured by giving new models a variety of standardized benchmark tests. Many of these tests consisted of challenging, SAT-calibre problems in areas like math, science and logic. Comparing the models’ scores over time served as a rough measure of AI progress.
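As a toy illustration of how such benchmark scores are computed and compared across models (the questions and answers below are invented, not drawn from any real benchmark):

```python
def benchmark_score(model_answers, answer_key):
    """Fraction of benchmark questions a model answered correctly."""
    correct = sum(1 for q, gold in answer_key.items()
                  if model_answers.get(q) == gold)
    return correct / len(answer_key)

# Hypothetical answer key and two model submissions (illustrative only).
answer_key = {"q1": "B", "q2": "D", "q3": "A", "q4": "C"}
older_model = {"q1": "B", "q2": "A", "q3": "A", "q4": "D"}
newer_model = {"q1": "B", "q2": "D", "q3": "A", "q4": "D"}

older_score = benchmark_score(older_model, answer_key)  # 0.5
newer_score = benchmark_score(newer_model, answer_key)  # 0.75
```

When the best models all score near the ceiling on a benchmark, the gap between scores stops being informative, which is exactly the saturation problem motivating harder tests.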

Researchers from Zhejiang University and HKUST (Guangzhou) have developed a cutting-edge AI model, ProtET, that leverages multi-modal learning to enable controllable protein editing through text-based instructions. This innovative approach, published in Health Data Science, bridges the gap between biological language and protein sequence manipulation, enhancing functional protein design across domains like enzyme activity, stability, and antibody binding.

Proteins are the cornerstone of biological functions, and their precise modification holds immense potential for medical therapies and biotechnology. While traditional protein editing methods rely on labor-intensive laboratory experiments and single-task optimization models, ProtET introduces a transformer-structured encoder architecture and a hierarchical training paradigm. This model aligns protein sequences with natural language descriptions using contrastive learning, enabling intuitive, text-guided protein modifications.
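ProtET itself uses transformer encoders trained on millions of protein–biotext pairs. As a minimal sketch of the contrastive-learning objective described here, with toy 2-D vectors standing in for real encoder outputs, matched protein/text pairs are pushed to score higher than mismatched ones:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

def contrastive_loss(protein_embs, text_embs, temperature=0.1):
    """CLIP-style loss: each protein embedding should score highest
    against its own text description, relative to all other texts."""
    losses = []
    for i, p in enumerate(protein_embs):
        sims = [cosine(p, t) / temperature for t in text_embs]
        m = max(sims)
        log_denom = m + math.log(sum(math.exp(s - m) for s in sims))
        losses.append(log_denom - sims[i])  # -log softmax of the true pair
    return sum(losses) / len(losses)

# Toy embeddings: aligned pairs point in similar directions (illustrative).
proteins = [[1.0, 0.0], [0.0, 1.0]]
texts_aligned = [[0.9, 0.1], [0.1, 0.9]]
texts_shuffled = [[0.1, 0.9], [0.9, 0.1]]

loss_aligned = contrastive_loss(proteins, texts_aligned)
loss_shuffled = contrastive_loss(proteins, texts_shuffled)
```

Training drives the encoders toward the aligned configuration, so that a text instruction and the protein sequence it describes land near each other in the shared embedding space.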

The research team, led by Mingze Yin from Zhejiang University and Jintai Chen from HKUST (Guangzhou), trained ProtET on a dataset of over 67 million protein–biotext pairs, extracted from Swiss-Prot and TrEMBL databases. The model demonstrated exceptional performance across key benchmarks, improving protein stability by up to 16.9% and optimizing catalytic activities and antibody-specific binding.

A team of investigators from Dana-Farber Cancer Institute, The Broad Institute of MIT and Harvard, Google, and Columbia University has created an artificial intelligence model that can predict which genes are expressed in any type of human cell. The model, called EpiBERT, was inspired by BERT, a deep learning model designed to understand and generate human-like language.

The work appears in Cell Genomics.
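EpiBERT operates on genomic and epigenetic data rather than text, but the BERT-style idea it borrows (hide parts of the input and train a model to reconstruct them from context) can be sketched with a toy masked-token setup; the naive nearest-neighbor "model" below is purely illustrative, not how EpiBERT predicts.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_rate=0.3, seed=0):
    """Hide a random subset of tokens, returning the corrupted
    sequence and the positions the model must reconstruct."""
    rng = random.Random(seed)
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            corrupted.append(MASK)
            targets[i] = tok
        else:
            corrupted.append(tok)
    return corrupted, targets

def neighbor_predictor(corrupted, position):
    """A deliberately naive stand-in for a trained model: guess a
    masked token by copying the nearest unmasked neighbor."""
    for offset in range(1, len(corrupted)):
        for j in (position - offset, position + offset):
            if 0 <= j < len(corrupted) and corrupted[j] != MASK:
                return corrupted[j]
    return None

sequence = list("AAAACCCCGGGG")  # toy DNA-like sequence
corrupted, targets = mask_tokens(sequence)
accuracy = (sum(neighbor_predictor(corrupted, i) == tok
                for i, tok in targets.items()) / len(targets)) if targets else 0.0
```

A real BERT-style model replaces the nearest-neighbor guess with a transformer trained so its reconstructions improve over millions of masked examples.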

Every cell in the body has the same genome, so the difference between two types of cells is not the genes in the genome, but which genes are turned on, when, and how strongly. Approximately 20% of the genome codes for the regulatory instructions that determine which genes are turned on, but very little is known about where those codes are in the genome, what their instructions look like, or how mutations affect function in a cell.