
Quantum information (QI) processing may be the next game changer in the evolution of technology, providing unprecedented computational capabilities, security, and detection sensitivities. Qubits, the basic hardware elements of quantum information, are the building blocks for quantum computers and quantum information processing, but there is still much debate over which type of qubit is actually best.

Research and development in this field is advancing at an astonishing pace to see which system or platform outperforms the others. Platforms as diverse as superconducting Josephson junctions, trapped ions, topological qubits, ultracold neutral atoms, and even diamond vacancies make up the zoo of candidates for building qubits.

So far, only a handful of platforms have demonstrated the potential for quantum computing, ticking the boxes of high-fidelity controlled gates, easy qubit-qubit coupling, and good isolation from the environment, which means sufficiently long-lived coherence.
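For readers less familiar with the abstraction, a qubit can be described independently of the hardware that realizes it: a normalized two-component complex state vector manipulated by unitary gates. The NumPy sketch below is an idealized, noiseless illustration of that model (not tied to any of the platforms above), showing a single qubit put into superposition by a Hadamard gate and the resulting measurement probabilities.

```python
import numpy as np

# A single qubit as a 2-component complex state vector: |0> = [1, 0].
ket0 = np.array([1.0, 0.0], dtype=complex)

# Hadamard gate: maps |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0

# Measurement probabilities follow the Born rule: |amplitude|^2.
probs = np.abs(state) ** 2
print(probs)  # ~[0.5, 0.5]: an ideal, fully coherent superposition
```

Real devices deviate from this ideal through gate errors and decoherence, which is exactly what the "high-fidelity gates" and "long-lived coherence" checklist items measure.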

Services will be provided through Azure Government Cloud to ensure data security.

Government agencies in the U.S. will now have access to OpenAI’s artificial intelligence (AI) models, such as GPT-4 and its predecessor, after Microsoft announced that it would offer Azure OpenAI services to the government as well.

OpenAI’s GPT-4 is the powerhouse behind Microsoft’s new Bing search engine and a hot favorite among companies looking to leverage AI to make better use of their data. According to Microsoft, its Azure OpenAI service, launched only in January this year, serves more than 4,500 customers.
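For a sense of what consuming the service looks like, here is a minimal sketch using the openai Python package’s pre-1.0 interface as it worked at the time. The endpoint, API key, and deployment name are placeholders, and an Azure Government tenant would use its own endpoint domain; this is an illustration, not Microsoft’s reference code.

```python
import openai

# Placeholder Azure OpenAI settings; a real deployment supplies its own
# resource endpoint, API key, and deployment name from the Azure portal.
openai.api_type = "azure"
openai.api_base = "https://example-resource.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "<your-azure-openai-key>"

response = openai.ChatCompletion.create(
    engine="gpt-4-deployment",  # the deployment name, not the raw model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this procurement memo in two sentences."},
    ],
)
print(response["choices"][0]["message"]["content"])
```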

Google announced the general availability (GA) of generative AI services based on Vertex AI, the Machine Learning Platform as a Service (ML PaaS) offering from Google Cloud. With the service now generally available, enterprises and organizations can integrate the platform’s capabilities into their applications.

With this update, developers can use several new tools and models, such as the text completion model driven by PaLM 2, the Embeddings API for text, and other foundation models in the Model Garden. They can also leverage the tools available within the Generative AI Studio to fine-tune and deploy customized models. Google claims that enterprise-grade data governance, security, and safety features are also built into the Vertex AI platform, giving customers confidence to consume the foundation models, customize them with their own data, and build generative AI applications.

Customers can use the Model Garden to access and evaluate base models from Google and its partners. There are over 60 models, with plans to add more in the future. Also, the Codey model for code completion, code generation, and chat, announced at the Google I/O conference in May, is now available in public preview.
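A minimal sketch of calling the PaLM 2 text model and the Embeddings API through the vertexai Python SDK (shipped with google-cloud-aiplatform) is shown below. The project ID and region are placeholders, and the model identifiers are the ones Google documented publicly at the time; substitute whatever your project exposes.

```python
import vertexai
from vertexai.language_models import TextGenerationModel, TextEmbeddingModel

# Placeholder GCP project settings; replace with your own project and region.
vertexai.init(project="my-gcp-project", location="us-central1")

# Text completion with a PaLM 2-based foundation model.
text_model = TextGenerationModel.from_pretrained("text-bison@001")
completion = text_model.predict(
    "Draft a one-line product description for a smart thermostat."
)
print(completion.text)

# Text embeddings via the Embeddings API.
embedding_model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")
vectors = embedding_model.get_embeddings(["smart thermostat", "home automation"])
print(len(vectors[0].values))  # dimensionality of the returned embedding
```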

Google’s aiming to make it easier to use and secure passwords — at least, for users of the Password Manager tool built into its Chrome browser.

Today, the tech giant announced that Password Manager, which generates unique passwords and autofills them across platforms, will soon gain biometric authentication on PC. (Android and iOS have had biometric authentication for some time.) When enabled, it’ll require an additional layer of security, like fingerprint recognition or facial recognition, before Chrome autofills passwords.

Exactly which types of biometrics are available in Password Manager on desktop will depend on the hardware attached to the PC, of course (e.g. a fingerprint reader), as well as whether the PC’s operating system supports it. Beyond “soon,” Google didn’t say when to expect the feature to arrive.

Possibly a move to freeze and stall the tech, like the bioethics clowns who were able to freeze biotech. But China wouldn’t sign on to any freeze, thankfully. And the tech has already spread across third-world countries.


WASHINGTON, June 6 (Reuters) — Senate Majority Leader Chuck Schumer said on Tuesday he has scheduled three briefings for senators on artificial intelligence, including the first classified briefing on the topic.

In a letter to colleagues on Tuesday, the Democratic leader said senators need to deepen their understanding of artificial intelligence.

“AI is already changing our world, and experts have repeatedly told us that it will have a profound impact on everything from our national security to our classrooms to our workforce, including potentially significant job displacement,” Schumer said.

A raft of industry experts have given their views on the likely impact of artificial intelligence on humanity in the future. The responses are unsurprisingly mixed.

The Guardian has released an interesting article regarding the potential socioeconomic and political impact of the ever-increasing rollout of artificial intelligence (AI) on society. It asked various experts in the field about the subject, and the responses were, not surprisingly, a mixed bag of doom, gloom, and hope.



“I don’t think the worry is of AI turning evil or AI having some kind of malevolent desire,” Jessica Newman, director of University of California Berkeley’s Artificial Intelligence Security Initiative, told the Guardian. “The danger is from something much more simple, which is that people may program AI to do harmful things, or we end up causing harm by integrating inherently inaccurate AI systems into more and more domains of society,” she added.

A team of security researchers at Georgia Tech, the University of Michigan and Ruhr University Bochum in Germany has reported a new form of side-channel attack that capitalizes on power and speed management methods used by graphics processing units and systems on a chip (SoCs).

The researchers demonstrated how they could steal data by targeting information leaked by the Dynamic Voltage and Frequency Scaling (DVFS) mechanisms found on most modern chips.

As manufacturers race to develop thinner and more energy-efficient devices, they must train their sights on constructing SoCs that balance power consumption, heat generation and processing speed.
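To make the leak concrete: DVFS ties the chip’s clock frequency to its power and thermal state, and frequency shows up directly in how long a fixed amount of work takes. The sketch below is only an illustration of that observable, not the researchers’ GPU attack: it times an unchanging workload and reports the timing spread, which on a DVFS-managed CPU tracks frequency changes driven by whatever else the chip is doing.

```python
import time

def fixed_workload(iterations=2_000_000):
    """A constant amount of arithmetic work; its wall-clock time varies
    with the clock frequency the DVFS governor has currently chosen."""
    x = 0
    for i in range(iterations):
        x += i * i
    return x

def sample_timings(samples=50):
    """Repeatedly time the same workload. Shifts in these timings reflect
    frequency changes driven by power/thermal state, which is the kind of
    signal a frequency side channel observes."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        fixed_workload()
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    t = sample_timings()
    print(f"min={min(t):.4f}s  max={max(t):.4f}s  spread={max(t) - min(t):.4f}s")
```

An actual attack correlates such measurements with a victim computation running on the same chip, since data-dependent power draw nudges the frequency and therefore the observed timings.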

SAN FRANCISCO, June 1 (Reuters) — Hackers have stolen data from the systems of a number of users of the popular file transfer tool MOVEit Transfer, U.S. security researchers said on Thursday, one day after the maker of the software disclosed that a security flaw had been discovered.

Software maker Progress Software Corp (PRGS.O), after disclosing the vulnerability on Wednesday, said it could lead to unauthorized access to users’ systems.

The managed file transfer software made by the Burlington, Massachusetts-based company allows organizations to transfer files and data between business partners and customers.

Fun thought experiment today:


What if today’s ultra-wealthy, the Musks, Bezoses, and Zuckerbergs of the world, decided to demonstrate the true extent of what AI can do today? What if money were no object? Let’s think about some ambitious, albeit costly, applications of current AI technologies that are already within our grasp.

Personal Protection Army

In the realm of personal security, AI-driven technology offers a tantalizing glimpse into the future. Imagine a scenario where, instead of a cadre of bodyguards, a personal drone swarm follows you around, providing an unprecedented level of safety and protection.