




Artificial Intelligence In Manufacturing: Four Use Cases You Need To Know In 2023

In 2023, Artificial Intelligence (AI) is becoming increasingly essential to the day-to-day operations of manufacturers all over the world. Autonomous robots and machine learning-powered predictive analytics mean companies are able to streamline processes, increase productivity and reduce the damage done to the environment in many new ways.



Importantly, rather than replacing human workers, a priority for many organizations is doing this in a way that augments human abilities and enables us to work more safely and efficiently.

Today, the concept of AI technology in factories goes far beyond the robot-filled workplaces that have been a feature of industries since the 1960s to encompass smart, connected manufacturing plants where humans and machines work together, and data and analytics enable better predictions and decision-making at every stage of the process. So let's take a look at some of the most interesting use cases for AI in manufacturing in 2023:

Cobots

Robots have been used to automate manual tasks in factories and manufacturing plants for decades, but cobots are a relatively new development. What makes them different is that they are designed to work alongside humans in a safe way while augmenting our abilities with their own.

One big advantage of cobots over traditional industrial robots is that they are cheaper to operate as they don't need their own dedicated space in which to function. This means they can safely work on a regular plant floor without the need for protective cages or segregation from humans. They can pick components, carry out manufacturing operations like screwing, sanding, and polishing, and operate conventional manufacturing machinery like injection molding and stamping presses. They can also carry out quality control inspections using computer vision-enabled cameras.

Cobots are widely used by automotive manufacturers, including BMW and Ford, where they perform tasks including gluing and welding, greasing camshafts, injecting oil into engines, and performing quality control inspections.

And consumer goods manufacturers, including giant Procter & Gamble, use cobots to streamline their manufacturing processes, engaging in tasks such as assembling and packaging products while maintaining the required high standards of hygiene.

AI in Additive Manufacturing

Often known as 3D printing, the term additive manufacturing is used because it includes any manufacturing process where products and objects are built up, layer by layer. This differentiates it from more traditional, subtractive manufacturing processes where a product or component is made by cutting away at a block of material.

AI plays an important role in additive manufacturing by optimizing the way materials are dispensed and applied, as well as optimizing the design of complex products (see Generative Design below). It can also be used to spot and correct errors made by 3D printing technology in real-time.

Additive manufacturing equipment manufacturer Markforged has developed a tool called Blacksmith that uses AI to compare product designs with actual finished products and automate fine-tuning of the manufacturing process in order to bring them more closely into line.

Technology like this will be of benefit to manufacturers such as footwear giants Adidas and Reebok, which are now using 3D printing technology to create complex lattice structures for more comfortable and performance-enhancing running shoes.

Generative Design

Generative design is a bit like the generative AI we've seen in technologies like ChatGPT or DALL-E, except instead of telling it to create text or images, we tell it to design products.

Designers simply enter parameters such as what materials should be used, the size and weight of the desired product, what manufacturing methods will be used, and how much it should cost, and the generative design algorithms spit out blueprints and instructions.

Design engineers in the manufacturing industry can use this method to create a wide selection of design options for new products they want to create and then pick and choose the best ones to put into production. In this way, it accelerates product development processes while enabling innovation in design.
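To make the idea concrete, here is a minimal sketch of generative design as constrained search. Everything in it (the parameter names and the toy weight and strength formulas) is illustrative rather than any real CAD tool's API: the algorithm samples many candidate designs, keeps those that satisfy the engineer's constraints, and surfaces the best one.

```python
import random

# Toy sketch of generative design as constrained random search: sample
# candidate bracket designs, keep those meeting the constraints, then
# pick the lightest survivor. The parameter names and the weight/strength
# formulas are purely illustrative, not any real design tool's API.

def generate_candidates(n, max_weight_kg=2.0, min_strength=50.0, seed=0):
    rng = random.Random(seed)
    candidates = []
    for _ in range(n):
        thickness_mm = rng.uniform(1.0, 10.0)
        lattice_density = rng.uniform(0.1, 0.9)
        weight_kg = thickness_mm * lattice_density * 0.4      # toy weight model
        strength = thickness_mm * 12 + lattice_density * 30   # toy strength model
        if weight_kg <= max_weight_kg and strength >= min_strength:
            candidates.append({"thickness_mm": thickness_mm,
                               "lattice_density": lattice_density,
                               "weight_kg": weight_kg,
                               "strength": strength})
    return candidates

def best_design(candidates):
    # Among designs that satisfy the constraints, prefer the lightest.
    return min(candidates, key=lambda c: c["weight_kg"])
```

A real generative design engine would use physics simulation and far smarter search, but the shape of the loop (generate, filter by constraints, rank) is the same.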

Generative design is particularly powerful when it comes to conceptualizing what can be done with new additive manufacturing processes, such as 3D printing, due to the complexity of the shapes and structures that can be created.

It has been used to create new types of components that are cheaper, lighter, and sturdier than existing components, improving the overall qualities of many products from cars and aircraft to prefabricated houses and structures.

Predictive Maintenance

Manufacturers use AI to analyze data from sensors and machinery on the factory floor in order to understand how and when failures and breakdowns are likely to occur. This means that they can ensure that resources and spare parts necessary for repair will be on hand to ensure a quick fix. It also means they can more accurately predict the amount of downtime that can be expected in a particular process or operation and account for this in their scheduling and logistical planning. Data from vibrations, thermal imaging, operating efficiency, and analysis of oils and liquids in machinery can all be processed via machine learning algorithms for vital insights into the health of manufacturing machinery.
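As a rough illustration of the predictive maintenance idea, the sketch below flags a machine whose latest vibration reading deviates sharply from its own historical baseline. Real systems combine many sensor streams and far richer models; the z-score threshold here is just an assumption for the example.

```python
import statistics

# A minimal sketch of one ingredient of predictive maintenance: flagging
# a machine whose latest vibration reading deviates sharply from its own
# historical baseline, using a simple z-score test.

def is_anomalous(history, latest, z_threshold=3.0):
    """True if `latest` lies more than z_threshold standard deviations
    from the mean of the historical readings."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > z_threshold
```

In practice, flagged readings would feed a scheduling system so that spare parts and technicians are ready before the predicted failure occurs.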

Some examples of this in practice include Pepsi and Colgate, which both use technology designed by AI startup Augury to detect problems with manufacturing machinery before they cause breakdowns.

The Lights-Out Factory

A lights-out factory is a smart factory capable of operating entirely autonomously, without any humans on site. Although mostly theoretical, some examples already exist – such as the plant that Japanese robotics manufacturer FANUC has run without humans since 2001, which is capable of operating without human supervision for periods of up to 30 days.

Electronics manufacturer Philips also operates a factory in the Netherlands that makes electric razors, where only nine human members of staff are needed on site at any time. This is a trend we can expect other companies to work toward as the technology becomes increasingly efficient and affordable. A robots-only workforce means a factory can potentially operate 24/7 with no need for human intervention, which could bring big benefits in output and efficiency. Of course, questions will need to be addressed about the impact that removing humans from the manufacturing workforce will have on wider society.

To stay on top of the latest on new and emerging business and tech trends, make sure to subscribe to my newsletter, follow me on Twitter, LinkedIn, and YouTube, and check out my books Future Skills: The 20 Skills and Competencies Everyone Needs to Succeed in a Digital World and The Future Internet: How the Metaverse, Web 3.0, and Blockchain Will Transform Business and Society.


Age Of AI: Everything You Need To Know About Artificial Intelligence

AI is appearing in seemingly every corner of modern life, from music and media to business and productivity, even dating. There's so much happening that it can be hard to keep up — so read on to find out everything from the latest big developments to the terms and companies you need to know in order to stay current in this fast-moving field.

To begin with, let's just make sure we're all on the same page: what is AI?

Artificial intelligence, often used interchangeably with machine learning, is a kind of software system based on neural networks, a technique that was actually pioneered decades ago but has blossomed very recently thanks to powerful new computing resources. AI has enabled effective voice and image recognition, as well as the ability to generate synthetic imagery and speech. And researchers are hard at work making it possible for an AI to browse the web, book tickets, tweak recipes and more.

Oh, but if you're worried about a Matrix-type rise of the machines — don't be. We'll talk about that later!

Our guide to AI has three main parts, each of which we will update regularly and can be read in any order:

  • First, the most fundamental concepts you need to know as well as more recently important ones.
  • Next, an overview of the major players in AI and why they matter.
  • And last, a curated list of the recent headlines and developments that you should be aware of.

By the end of this article you'll be about as up to date as anyone can hope to be these days. We will also be updating and expanding it as we press further into the age of AI.

    AI 101


    One of the wild things about AI is that although the core concepts date back more than 50 years, few of them were familiar to even the tech-savvy before very recently. So if you feel lost, don't worry — everyone is.

    And one thing we want to make clear up front: Although it's called "artificial intelligence," that term is a little misleading. There's no one definition of intelligence out there, but what these systems do is definitely closer to calculators than brains. The input and output of this calculator is just a lot more flexible. You might think of artificial intelligence like artificial coconut — it's imitation intelligence.

    With that said, here are the basic terms you'll find in any discussion of AI.

    Neural network

    Our brains are largely made of interconnected cells called neurons, which mesh together to form complex networks that perform tasks and store information. Recreating this amazing system in software has been attempted since the '60s, but the processing power required wasn't widely available until 15-20 years ago, when GPUs let digitally defined neural networks flourish. At their heart they are just lots of dots and lines: the dots are data and the lines are statistical relationships between those values. As in the brain, this can create a versatile system that quickly takes an input, passes it through the network and produces an output. This system is called a model.
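The "dots and lines" picture can be made concrete in a few lines of code: a tiny feed-forward network with hand-set weights, where each neuron sums its weighted inputs, adds a bias and squashes the result. Real networks learn these numbers during training; the values used below are arbitrary.

```python
import math

# A bare-bones feed-forward network. Each layer is a list of
# (weights, bias) pairs, one per neuron; an input is passed through
# every layer in turn to produce an output. The weights here are
# hand-set for illustration, not learned.

def sigmoid(x):
    # Squash any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, layers):
    activations = inputs
    for layer in layers:
        activations = [
            sigmoid(sum(w * a for w, a in zip(weights, activations)) + bias)
            for weights, bias in layer
        ]
    return activations
```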

    Model

    The model is the actual collection of code that accepts inputs and returns outputs. The similarity in terminology to a statistical model or a modeling system that simulates a complex natural process is not accidental. In AI, model can refer to a complete system like ChatGPT, or pretty much any AI or machine learning construct, whatever it does or produces. Models come in various sizes, meaning both how much storage space they take up and how much computational power they take to run. And these depend on how the model is trained.

    Training

    To create an AI model, the neural networks making up the base of the system are exposed to a bunch of information in what's called a dataset or corpus. In doing so, these giant networks create a statistical representation of that data. This training process is the most computation-intensive part, often taking weeks or months (you can kind of go as long as you want) on huge banks of high-powered computers. The reason is that not only are the networks complex, but datasets can be extremely large: billions of words or images that must be analyzed and given representation in the giant statistical model. On the other hand, once the model is done cooking, it can be much smaller and less demanding when it's being used, a process called inference.
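The difference in cost between training and inference can be seen even in a toy example: fitting a single weight to data takes many gradient-descent passes over the dataset, while using the fitted weight afterwards is a single multiplication. The data and learning rate below are, of course, illustrative.

```python
# Training vs. inference cost in miniature: fit one weight w so that
# w * x approximates y, by repeated gradient-descent passes (the
# expensive part). Using the fitted weight is a single multiplication
# (the cheap part).

def train(data, epochs=200, lr=0.05):
    w = 0.0
    for _ in range(epochs):               # many passes over the dataset
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x     # gradient of squared error
            w -= lr * grad
    return w

def infer(w, x):
    return w * x                          # inference: one multiplication
```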


    Inference

    When the model is actually doing its job, we call that inference, very much in the traditional sense of the word: stating a conclusion by reasoning about available evidence. Of course it is not exactly "reasoning," but statistically connecting the dots in the data it has ingested and, in effect, predicting the next dot. For instance, given the prompt "Complete the following sequence: red, orange, yellow…", the model would find that these words correspond to the beginning of a list it has ingested, the colors of the rainbow, and infer the next item until it has produced the rest of that list. Inference is generally much less computationally costly than training: Think of it like looking through a card catalog rather than assembling it. Big models still have to run on supercomputers and GPUs, but smaller ones can run on a smartphone or something even simpler.
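Here is a deliberately crude, lookup-table version of the rainbow example: complete a prompt by finding the ingested sequence it begins and emitting the rest. A real model does this statistically over learned patterns rather than by exact lookup, and the tiny corpus below is invented for the illustration.

```python
# A lookup-table caricature of inference as sequence completion. The
# "training data" is two hard-coded lists; a real model would encode
# such patterns statistically rather than store them verbatim.

CORPUS = [
    ["red", "orange", "yellow", "green", "blue", "indigo", "violet"],
    ["do", "re", "mi", "fa", "sol", "la", "ti"],
]

def complete(prompt):
    for sequence in CORPUS:
        if sequence[:len(prompt)] == prompt:
            return sequence[len(prompt):]
    return []   # nothing in the "training data" starts this way
```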

    Generative AI

    Everyone is talking about generative AI, and this broad term just means an AI model that produces an original output, like an image or text. Some AIs summarize, some reorganize, some identify, and so on — but an AI that actually generates something (whether or not it "creates" is arguable) is especially popular right now. Remember that just because an AI generated something, that doesn't mean it is correct, or even that it reflects reality at all! Only that it didn't exist before you asked for it, like a story or painting.

    Today's top terms

    Beyond the basics, here are the AI terms that are most relevant in mid-2023.

    Large language model

    The most influential and versatile form of AI available today, large language models are trained on pretty much all the text making up the web and much of English literature. Ingesting all this results in a foundation model (read on) of enormous size. LLMs are able to converse and answer questions in natural language and imitate a variety of styles and types of written documents, as demonstrated by the likes of ChatGPT, Claude and LLaMA. While these models are undeniably impressive, it must be kept in mind that they are still pattern recognition engines: when they answer, they are attempting to complete a pattern they have identified, whether or not that pattern reflects reality. LLMs frequently hallucinate in their answers, which we will come to shortly.

    If you want to learn more about LLMs and ChatGPT, we have a whole separate article on those!

    Foundation model

    Training a huge model from scratch on huge datasets is costly and complex, so you don't want to do it any more often than you have to. Foundation models are the big, from-scratch ones that need supercomputers to run, but they can be trimmed down to fit in smaller containers, usually by reducing the number of parameters. You can think of parameters as the total dots the model has to work with, and these days there can be millions, billions or even trillions of them.

    Fine tuning

    A foundation model like GPT-4 is smart, but it's also a generalist by design — it absorbed everything from Dickens to Wittgenstein to the rules of Dungeons & Dragons, but none of that is helpful if you want it to help you write a cover letter for your résumé. Fortunately, models can be fine tuned by giving them a bit of extra training using a specialized dataset, for instance a few thousand job applications that happen to be lying around. This gives the model a much better sense of how to help you in that domain without throwing away the general knowledge it has collected from the rest of its training data.
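Fine tuning can be caricatured in code, too. If we pretend a "model" is just word counts learned from a big general corpus, fine tuning amounts to nudging those counts with a small specialized corpus instead of retraining from scratch. The corpora and the weighting scheme below are illustrative only.

```python
from collections import Counter

# Fine tuning, crudely sketched: a "model" here is word frequencies.
# We keep the counts learned from a big general corpus and add extra,
# amplified counts from a small specialized corpus, rather than
# rebuilding everything from scratch.

def train_counts(corpus):
    return Counter(word for doc in corpus for word in doc.split())

def fine_tune(base_counts, specialized_corpus, weight=5):
    tuned = Counter(base_counts)          # keep the general knowledge
    for doc in specialized_corpus:
        for word in doc.split():
            tuned[word] += weight         # amplify the small dataset
    return tuned
```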

    Reinforcement learning from human feedback, or RLHF, is a special kind of fine tuning you'll hear about a lot — it uses data from humans interacting with the LLM to improve its communication skills.

    Diffusion

    From a paper on an advanced post-diffusion technique, you can see how an image can be reproduced from even very noisy data. Image Credits: OpenAI

    Image generation can be done in numerous ways, but by far the most successful as of today is diffusion, which is the technique at the heart of Stable Diffusion, Midjourney and other popular generative AIs. Diffusion models are trained by showing them images that are gradually degraded by adding digital noise until there is nothing left of the original. By observing this, diffusion models learn to do the process in reverse as well, gradually adding detail to pure noise in order to form an arbitrarily defined image. We're already starting to move beyond this for images, but the technique is reliable and relatively well understood, so don't expect it to disappear any time soon.
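The forward (noising) half of that process is simple enough to sketch: repeatedly blend a list of "pixel" values toward random noise, recording each step. A diffusion model is trained on adjacent pairs from such trajectories so it can learn to run the process in reverse. The step count and noise level here are arbitrary choices for the example.

```python
import random

# The forward half of diffusion, sketched: gradually degrade an "image"
# (a list of pixel values) by blending it toward random noise, keeping
# every intermediate step. Adjacent pairs from the trajectory are what
# a denoising model would train on.

def add_noise_step(pixels, noise_level, rng):
    return [(1 - noise_level) * p + noise_level * rng.random()
            for p in pixels]

def degrade(pixels, steps=10, noise_level=0.3, seed=0):
    rng = random.Random(seed)
    trajectory = [list(pixels)]
    for _ in range(steps):
        trajectory.append(add_noise_step(trajectory[-1], noise_level, rng))
    # Training pairs for the reverse model: (trajectory[i+1], trajectory[i])
    return trajectory
```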

    Hallucination

    Originally this was a problem of certain imagery in training slipping into unrelated output, such as buildings that seemed to be made of dogs due to an over-prevalence of dogs in the training set. Now an AI is said to be hallucinating when, because it has insufficient or conflicting data in its training set, it just makes something up.

    This can be either an asset or a liability; an AI asked to create original or even derivative art is hallucinating its output; an LLM can be told to write a love poem in the style of Yogi Berra, and it will happily do so — despite such a thing not existing anywhere in its dataset. But it can be an issue when a factual answer is desired; models will confidently present a response that is half real, half hallucination. At present there is no easy way to tell which is which except checking for yourself, because the model itself doesn't actually know what is "true" or "false," it is only trying to complete a pattern as best it can.

    AGI or strong AI

    Artificial General Intelligence, or strong AI, is not really a well-defined concept, but the simplest explanation is that it is an intelligence that is powerful enough not just to do what people do, but learn and improve itself like we do. Some worry that this cycle of learning, integrating those ideas, and then learning and growing faster will be a self-perpetuating one that results in a super-intelligent system that is impossible to restrain or control. Some have even proposed delaying or limiting research to forestall this possibility.

    It's a scary idea, sure, and movies like "The Matrix" and "Terminator" have explored what might happen if AI spirals out of control and attempts to eliminate or enslave humanity. But these stories are not grounded in reality. The appearance of intelligence we see in things like ChatGPT is an impressive act, but has little in common with the abstract reasoning and dynamic multi-domain activity that we associate with "real" intelligence. While it's near-impossible to predict how things will progress, it may be helpful to think of AGI as something like interstellar space travel: We all understand the concept and are seemingly working toward it, but at the same time we're incredibly far from achieving anything like it. And due to the immense resources and fundamental scientific advances required, no one is going to just suddenly accomplish it by accident!

    AGI is interesting to think about, but there's no sense borrowing trouble when, as commentators point out, AI is already presenting real and consequential threats today despite, and in fact largely due to, its limitations. No one wants Skynet, but you don't need a superintelligence armed with nukes to cause real harm: people are losing jobs and falling for hoaxes today. If we can't solve those problems, what chance do we have against a T-1000?

    Top players in AI

    OpenAI


    If there's a household name in AI, it's this one. OpenAI began, as its name suggests, as an organization intending to perform research and provide the results more or less openly. It has since restructured as a more traditional for-profit company providing access to its advances in language models like ChatGPT through APIs and apps. It's headed by Sam Altman, a technotopian billionaire who nonetheless has warned of the risks AI could present. OpenAI is the acknowledged leader in LLMs but also performs research in other areas.

    Microsoft

    As you might expect, Microsoft has done its fair share of work in AI research, but like other companies has more or less failed to turn its experiments into major products. Its smartest move was to invest early in OpenAI, which scored it an exclusive long-term partnership with the company, which now powers its Bing conversational agent. Though its own contributions are smaller and less immediately applicable, the company does have a considerable research presence.

    Google

    Known for its moonshots, Google somehow missed the boat on AI despite its researchers literally inventing the technique that led directly to today's AI explosion: the transformer. Now it's working hard on its own LLMs and other agents, but is clearly playing catch-up after spending most of its time and money over the last decade boosting the outdated "virtual assistant" concept of AI. CEO Sundar Pichai has repeatedly said that the company is aligning itself firmly behind AI in search and productivity.

    Anthropic

    After OpenAI pivoted away from openness, siblings Dario and Daniela Amodei left it to start Anthropic, intending to fill the role of an open and ethically considerate AI research organization. With the amount of cash they have on hand, they're a serious rival to OpenAI even if their models, like Claude, aren't as popular or well-known yet.

    Stability


    Controversial but inevitable, Stability represents the "do what thou wilt" open source school of AI implementation, hoovering up everything on the internet and making the generative AI models it trains freely available to anyone with the hardware to run them. This is very in line with the "information wants to be free" philosophy but has also accelerated ethically dubious projects like generating pornographic imagery and using intellectual property without consent (sometimes at the same time).

    Elon Musk

    Not one to be left out, Musk has been outspoken about his fears regarding out-of-control AI, as well as some sour grapes after he contributed to OpenAI early on and it went in a direction he didn't like. While Musk is not an expert on this topic, as usual his antics and commentary do provoke widespread responses (he was a signatory on the above-mentioned "AI pause" letter) and he is attempting to start a research outfit of his own.

    Latest stories in AI

    OpenAI makes GPT-4 available

    Starting on July 6, all existing OpenAI API developers can access GPT-4 if they have a "history of successful payments." The company plans to open up access to new developers by the end of July and begin raising availability limits after that "depending on compute availability."

    Starting January 4, 2024, certain older OpenAI models — specifically GPT-3 and its derivatives — will no longer be available, and will be replaced with new "base GPT-3" models. Developers using the old models will have to manually upgrade their integrations by January 4, and those who wish to continue using fine-tuned old models beyond January 4 will need to fine-tune replacements atop the new base GPT-3 models.

    European tech leaders sign open letter warning against over regulation of AI in draft EU laws

    The open letter states that AI offers the "chance to rejoin the technological avant-garde" but that current regulatory proposals at the EU level could tip over into stifling the opportunities.

    Inflection lands $1.3B investment to build more 'personal' AI

    Inflection AI, an AI startup aiming to create "personal AI for everyone," has closed a $1.3 billion funding round led by Microsoft, Reid Hoffman, Bill Gates, Eric Schmidt and new investor Nvidia. CEO Mustafa Suleyman, who previously co-founded the Google-owned AI lab DeepMind, says that the new capital will support Inflection's work to build and design its first product, an AI-powered assistant called Pi.

    China might further lose chip access in new US ban

    The U.S. Department of Commerce could prohibit shipments of chips from manufacturers including Nvidia to customers in China as soon as early next month (July).

    The latest move to weigh additional restrictions on AI chip exports to China is part of the U.S.'s broader strategy to limit China's progress in AI, particularly in the military sphere. However, these measures are also having an adverse impact on the commercial AI sector in China, where many firms operate with teams that span both the U.S. and China.

    ChatGPT uses Bing and Bing uses ChatGPT

    ChatGPT Plus subscribers can now access a new feature on the ChatGPT app called Browsing to have ChatGPT search Bing for answers to prompts or questions. OpenAI says that the Browsing feature is particularly useful for queries relating to current events and other information that "extend[s] beyond [ChatGPT's] original training data." When Browsing is disabled, ChatGPT's knowledge cuts off in 2021.

    AI can't win a Grammy

    If a musician's AI-assisted composition is to be eligible for a Grammy, they'll need to make sure that their human contribution is "meaningful and more than de minimis," the rules now state. An update to Grammy awards' eligibility criteria states that "[o]nly human creators are eligible to be submitted for consideration," and that "[a] work that contains no human authorship is not eligible in any Categories."

    Google-owned research lab DeepMind claims its next chatbot will rival ChatGPT

    DeepMind is using techniques from AlphaGo, DeepMind's AI system that was the first to defeat a professional human player at the board game Go, to make a ChatGPT-rivaling chatbot called Gemini. If all goes according to plan, Gemini will have the ability to plan or solve problems as well as analyze text, DeepMind CEO Demis Hassabis told Wired's Will Knight.

    Salesforce pledges to invest $500M in AI startups

    Salesforce announced that it's growing its Generative AI Fund from $250 million in size to $500 million. The Generative AI fund has already invested in several firms on the frontier of generative AI tech since launching in March. While far from the only fund investing primarily in generative AI, Salesforce aims to differentiate its tranche by prioritizing what it describes as "ethical" AI technologies.

    Nvidia becomes a trillion-dollar company

    GPU maker Nvidia was doing fine selling to gamers and cryptocurrency miners, but the AI industry put demand for its hardware into overdrive. The company has cleverly capitalized on this and recently broke through the symbolic (but intensely watched) trillion-dollar market cap when its stock hit $413. It shows no sign of slowing down, as it demonstrated recently at Computex…

    At Computex, Nvidia redoubles commitment to AI

    Among a dozen or two announcements at Computex in Taipei, Nvidia CEO Jensen Huang talked up the company's Grace Hopper superchip for accelerated computing (their terminology) and demoed generative AI that it claimed could turn anyone into a developer.

    OpenAI's Sam Altman lobbies the world on AI's behalf

    Altman was recently advising the U.S. Government on AI policy, though some saw this as letting the fox set the rules of the henhouse. The EU's various rulemaking bodies are also looking for input and Altman has been doing a grand tour, warning simultaneously against excessive regulation and the dangers of unfettered AI. If these perspectives seem opposed to you… don't worry, you're not the only one.

    Anthropic raises $450 million for its new generation of AI models

    We kind of spoiled this news for them when we published details of this fundraise and plan ahead of them, but Anthropic is now officially $450 million richer and hard at work on the successor to Claude and its other models. It's clear the AI market is large enough that there's room at the top for a few major providers — if they have the capital to get there.

    TikTok is testing its own in-app AI called Tako

    Video social networking platform TikTok is testing a new conversational AI that you can ask about whatever you want, including what you're watching. The idea is instead of just searching for more "husky howling" videos, you could ask Tako "why do huskies howl so much?" and it will give a useful answer as well as point you toward more content to watch.

    Microsoft is baking ChatGPT into Windows 11

    After investing hundreds of millions into OpenAI, Microsoft is determined to get its money's worth. It's already integrated GPT-4 into its Bing search platform, but now that Bing chat experience will be available — indeed, probably unavoidable — on every Windows 11 machine via a sidebar on the right side of the OS.

    Google adds a sprinkle of AI to just about everything it does

    Google is playing catch-up in the AI world, and although it is dedicating considerable resources to doing so, its strategy is still a little murky. Case in point: its I/O 2023 event was full of experimental features that may or may not ever make it to a broad audience. But they're definitely doing a full court press to get back in the game.


    Explained: Why Tech Giants Are Shifting Funding From Metaverse To Artificial Intelligence

    Back in 2021, the metaverse was a popular buzzword, with multiple media and technology companies entering the space with large investments.

    Several technology giants sought to capture parts of the metaverse value chain, including Google, Apple, Facebook, and several others. 

    Facebook even changed its name to Meta to highlight its metaverse ambitions. However, the environment has changed drastically over the last two years, especially with the recent rise of artificial intelligence, which offers more immediate benefits for companies and shareholders.

    Tech Giants Are Shifting Focus From Metaverse to AI

    Meta, whose Reality Labs arm is developing its metaverse, has been cutting jobs, including at Reality Labs itself, Zuckerberg's pet project. The company had changed its name just a year and a half earlier, and its metaverse arm burnt through nearly $14 billion last year. However, the company appears to have changed tack, with artificial intelligence now being the "single largest investment", according to Meta's recent investor conference calls.

    The company's recent call was also more focused on artificial intelligence than previous calls, with Zuckerberg mentioning the words 'artificial intelligence' more than 25 times. In comparison, the 'metaverse' was mentioned fewer than ten times. His language, too, was peppered with mentions of driving increased efficiency rather than growing the metaverse business.

    The company's app, Horizon Worlds, has not matched the success of the company's other apps and has performed even worse than other, less deep-pocketed "online worlds".

    The shift is not limited to Meta but has been seen across the board at companies like Disney, Google, Microsoft, and others. 

    Microsoft, for instance, shut down its virtual reality platform and laid off its industrial metaverse team, and appears to have dropped its focus on the metaverse for now.

    A few months later, after scaling down its metaverse-focused business, Microsoft stepped up its focus on AI with a $10 billion investment in OpenAI. Other, non-tech companies like Disney that had set up teams focused on metaverse opportunities have laid off entire teams.

    At the peak of the boom, reports by prominent banks had estimated the metaverse market would reach $1 trillion in size within a decade. However, these estimates look far off in the current scenario, where profitability is valued more highly than unprofitable growth.

    Why is the Focus Shifting to AI?

    The shifting of funds from the Metaverse to AI is mainly due to the immediacy of returns from AI, compared to the Metaverse. The latter is still in its infancy with several bottlenecks, such as the cost of the equipment that customers would need to buy, investments on the technology side, and interoperability issues, among others. 

    Public market investors have been selling off businesses that are unprofitable or excessively focused on growth rather than profitability.

    Venture capitalists, too, are more gung-ho about the AI sector, with investments in the sector jumping fivefold between 2020 and 2022. On the other hand, the technology for artificial intelligence is relatively advanced and offers scope for relatively more immediate application for both internal and external purposes. 

    "The two major technological waves driving our road map are AI today and, over the longer term, the Metaverse," Zuckerberg recently said. 

    Companies have already found immediate applications for AI, which is expected to help boost productivity.

    It Isn't Over for the Metaverse Yet

    While some companies may have shunned the Metaverse for more immediate rewards, there are several fronts on which the Metaverse space is progressing. 

    Firstly, Zuckerberg's idea of a metaverse seems to be quite different from the idea that the average cryptocurrency enthusiast has regarding the metaverse. It is unlikely that his metaverse would be decentralized or have much to do with blockchain, as is often suggested.

    Unfortunately, the Metaverse's intention of improving the digital experience was sidelined by the obsession to force crypto and blockchain into the ecosystem. With the hype around blockchain-based Metaverse dying away, the focus can return to creating high-quality virtual spaces. 

    Secondly, the virtual and augmented reality space is growing steadily, with companies trying to find the best technology to make wearables lightweight while enhancing the user experience. The price of such technology is quite high currently, putting it out of reach for a major part of the population – and a lack of access to the right technology would mean that a widely populated metaverse remains a dream. 

    Further, while Microsoft has shut down its industrial Metaverse, analysts estimate that the industrial metaverse space is doing quite well, especially on the digital twin project side. In addition, the Metaverse is ultimately a confluence of several different technological trends. Even if it doesn't succeed as a whole, it can help fast-track the growth of the technologies in the process.








    This post first appeared on Autonomous AI, please read the original post: here
