
AI know what you did last summer. | by Matt Barrie | Aug, 2023



New Developments in Computer Graphics

New Assistant Professor Uses Computer Graphics To Better Understand World

Things that can be difficult to observe with the naked eye are often better explored through the lens of computer graphics.

Bo Zhu has used computer graphics to bridge the gap between the physical and the virtual, enabling scientific experiments that would otherwise be impossible.

Zhu spent five years at Dartmouth College and earned an NSF CAREER Award for his research in 2022. He will continue to explore the endless possibilities created by computer graphics and computational physics as he starts this fall as an assistant professor in the School of Interactive Computing.

Zhu's focus has been the simulation of fluids, from drops of dew gliding across strands of spider silk to soap bubbles bursting into droplets. He has published numerous papers at the Association for Computing Machinery's (ACM) SIGGRAPH conference, the International Conference on Learning Representations (ICLR), and the Conference on Neural Information Processing Systems (NeurIPS).

As he begins at Georgia Tech, Zhu said he plans to pivot his lab toward building high-fidelity, artificial intelligence (AI)-powered physics simulation algorithms with applications in science, design, and health. For example, by combining mathematical models, physics simulations, and real-world data, Zhu and his team will explore the complex physical processes of the human respiratory system and track the flow of pathogen-carrying droplets.

"We develop computational tools to help scientists explore fundamental problems related to complex physical systems," Zhu said. "We wanted to build computational tools to reproduce these processes based on first principles and to explore their different possibilities. These experiments are expensive or impractical to carry out in the real world, so we use computers to automate this exploration."

Top photo: Assistant Professor Bo Zhu joins the faculty of the School of Interactive Computing for the fall 2023 semester. Courtesy photo by Eli Burakian/Dartmouth College. Above: A sample of simulated fluid computer graphics created by Zhu. Courtesy photo provided by Bo Zhu.

What interests you about working at Georgia Tech?

What captivates me about working at Georgia Tech is the opportunity to collaborate with exceptional colleagues and students across the School of Interactive Computing, the College of Computing, and the broader Institute. Because of the interdisciplinary nature of my research, being part of a community where individual groups strive to address common scientific problems aligns perfectly with my career goals. I am incredibly excited to participate in this process. 

What will your research consist of?

My research undertakings at Tech will revolve around developing computational methodologies tailored for exploring intricate physical systems, with specific applications spanning visual computing, scientific computing, and human health domains. Central to my research vision is a commitment to modeling physical systems and natural phenomena characterized by their multifaceted geometric and dynamic attributes. 

What inspired you to pursue this field of research?

I draw my inspiration from the intricate and captivating nature of our physical world. As a computer scientist, the constant desire to reproduce this beauty and complexity within a virtual realm through computer code motivates me.

Our collective fascination with the visual allure and mathematical intricacy of multifaceted natural phenomena serves as a potent motivation. This motivation stems from their significant roles in addressing scientific and health-related challenges. It fuels our pursuit of forging computer algorithms that can authentically replicate and systematically study these intricate processes within a virtual, computational environment.

What do you hope to accomplish in your research?

My overarching aspiration for my research is to create computational tools that cater to the broader scientific community, facilitating their endeavors to tackle fundamental and socially pertinent scientific challenges. Given the inherently visual nature of our research, I also aim for our computing platforms to enhance connectivity with the public and captivate their interest in these significant scientific issues.

What are you looking forward to about teaching your students and how do you plan to work with them?

I eagerly anticipate the opportunity to engage with Georgia Tech students through the dual avenues of instructing them in cutting-edge courses in computer graphics and scientific computing, as well as collaborating closely with them to tackle intriguing research challenges.

At the core of my approach to research mentoring lies the philosophy that "interest is the best teacher." This guiding principle underscores my commitment to providing personalized support to every student, facilitating the development of their passions and skills in scientific research. 


Siggraph 2023 Highlights New Graphics Technologies — And Missed Opportunities

Wearing the Flamera prototype headset at Siggraph 2023

Rob Squires

In my previous post, I focused on the slew of announcements that Nvidia made as it dominated this year's Siggraph conference. In this post, I want to take the time to showcase some of the other companies and devices that made an impression on me during the event. I'll close with some observations about why this year, for the first time ever, I left the world's premier annual graphics conference feeling a little disappointed by the show.

AMD's W7600 Professional Workstation Graphics Card

Anshel Sag

AMD's New Professional Workstation Cards

Not to be left out of the conversation by Nvidia, AMD announced a pair of new graphics cards a few days before the show to help further expand its line of professional graphics cards. AMD announced the Radeon Pro W7600 and W7500 mid-range cards to complement the already available W7900 and W7800 high-end models.

The W7500 and W7600 are targeted at the largest part of the professional graphics market, with a single-slot design and $429 and $599 price tags, respectively. What makes these cards exceptional is their very low power consumption. The W7600 requires only a single 6-pin power connector, while the W7500 draws only 70 watts, meaning that it doesn't need any additional power connector and can run entirely on the 75 watts supplied by a computer's PCIe slot.
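The arithmetic behind that claim is easy to check. Here is a quick sketch using the standard PCIe power limits; the W7600's 130 W board power is an assumed figure for illustration, since the article only states that it needs a 6-pin connector:

```python
# Standard PCIe power limits: the slot supplies up to 75 W and a 6-pin
# auxiliary connector adds up to 75 W more.
PCIE_SLOT_W = 75
SIX_PIN_W = 75

def needs_aux_connector(board_power_w: float) -> bool:
    """A card drawing more than the slot can supply needs an aux connector."""
    return board_power_w > PCIE_SLOT_W

# The 130 W figure for the W7600 is an assumption for illustration only.
for name, watts in [("Radeon Pro W7500", 70), ("Radeon Pro W7600", 130)]:
    aux = needs_aux_connector(watts)
    budget = PCIE_SLOT_W + (SIX_PIN_W if aux else 0)
    print(f"{name}: {watts} W board power, 6-pin needed: {aux}, budget: {budget} W")
```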

In addition to these new GPUs, AMD also had system integrator Silverdraft at its booth showing off a workstation with seven W7800 GPUs and a 64-core AMD Ryzen Threadripper PRO 5995WX CPU. In fact, AMD's booth was very much a marriage of the company's wildly successful CPU business with its latest professional workstation graphics cards, easily running all kinds of professional workflows. This included a Dell Precision 7865 running the DaVinci Resolve video-editing application on the latest and greatest 4K reference monitors from EIZO.

Meta's Prototype Headsets

Siggraph would not be complete without interesting research and development devices from some of the world's leading organizations, whether that's a university, the U.S. government or, in this case, a company investing heavily in the XR space like Meta. This year, Meta demonstrated two prototype headsets that utilize the latest in optics and display technologies.

Meta's Butterscotch Varifocal headset

Anshel Sag

The first headset, codenamed "Butterscotch Varifocal," combines the varifocal technology developed for the Half Dome prototype headset shown in 2018 with the retinal-resolution VR display that the company debuted in 2022. This headset also had windows cut into the sides to show how the varifocal mechanism works as it moves the display toward and away from the user's eyes depending on the virtual object the user is focusing on.
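The optics behind this are straightforward to sketch. In a headset, the display sits just inside the focal length of a magnifying lens, producing a virtual image the eye focuses on; the thin-lens equation 1/f = 1/d_o + 1/d_i then tells you how far to move the display to place that image at a given depth. This is textbook optics rather than Meta's actual control code, and the 40 mm focal length below is an assumed value:

```python
def display_distance_m(focal_length_m: float, target_depth_m: float) -> float:
    """Lens-to-display distance that places the virtual image at target_depth_m.

    From the thin-lens equation 1/f = 1/d_o + 1/d_i with a virtual image
    (d_i < 0), solving for the object (display) distance gives f*D/(f + D).
    """
    f, D = focal_length_m, target_depth_m
    return f * D / (f + D)

f = 0.040  # 40 mm lens focal length (assumed value for illustration)
for depth_m in [0.25, 0.5, 1.0, 2.0, 10.0]:
    d = display_distance_m(f, depth_m)
    print(f"focus at {depth_m:>5.2f} m -> display {d * 1000:.2f} mm from the lens")
```

In this toy calculation, only a few millimeters of display travel cover the entire range of focal depths, which is what makes a mechanical varifocal design feasible.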

I found the headset to be very impressive both for its high resolution and for its varifocal experience, especially considering how difficult this is to accomplish in most headsets today. The Meta engineers even included a great toggle for turning varifocal mode on and off, which really made you appreciate the ability to change focus. That said, the focus system definitely could be faster and more responsive, which I hope is something they work on for future versions. While it remains unclear when or if either of these technologies will ever make it into a shippable consumer headset, it is quite clear that Meta continues to innovate and explore ways to make VR better for its millions of users.

Meta's Flamera flat composition camera headset

Anshel Sag

Moving on from VR, Meta also demonstrated the Flamera flat composition camera headset—one of the most interesting-looking headsets I have ever seen. Meta designed this headset to showcase some of its latest pass-through optics technologies, which create a more realistic and higher-resolution AR experience. Pass-through is short for camera pass-through, which uses outside-facing cameras that provide a real-time view of the external world inside a closed headset to create an AR-like mixed reality experience.
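The capture-composite-display loop at the heart of pass-through can be sketched in a few lines. This is only a webcam toy, assuming the opencv-python package is installed, and nothing like Meta's per-eye, distortion-corrected pipeline:

```python
import cv2  # assumes the opencv-python package and a connected webcam

# Minimal sketch of the camera pass-through idea: capture the real world,
# composite virtual content over it, and show the result. A real headset
# does this per eye with lens-distortion correction and reprojection;
# this only illustrates the capture -> composite -> display loop.

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # "Virtual object": a translucent rectangle blended onto the camera feed.
    overlay = frame.copy()
    cv2.rectangle(overlay, (100, 100), (300, 300), (0, 255, 0), -1)
    composited = cv2.addWeighted(overlay, 0.4, frame, 0.6, 0)
    cv2.imshow("pass-through sketch", composited)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```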

Meta claims that its Flamera computational camera uses light-field technology for distortion-free, perspective-correct MR pass-through. Meta also claims that it has patched together dozens of sensors to create a realistic reproduction of the real world. While it did look good, the headset ran quite hot for many people, and it had only one focal point, which to me defeats the purpose of using light-field technology. Meta also said that it chose to use waveguides inside the headset to enable as thin a form factor as possible, but unfortunately this choice limited the headset's field of view. It will be interesting to see what Meta does down the road with its pass-through tech, especially when you consider how important pass-through will be for the next few years until waveguide technology improves enough to become mainstream.

Leia Acquires Dimenco

Without a doubt, one of the biggest pieces of news from the show—one that will likely reverberate within the industry for years—was Leia Inc.'s announcement that it has acquired Dimenco, another 3-D display manufacturer, based in the Netherlands. Dimenco's focus has primarily been on Windows users, building 3-D display technologies for laptops and monitors that enable 3-D productivity and 3-D gaming. Leia's strengths have mostly been in smaller displays like the Red Hydrogen One and the Leia Lumepad 2, which debuted earlier this year and which I reviewed here.

This acquisition will create one of the most comprehensive 3-D display manufacturers in the world, with expertise in both Windows and Android operating systems. Hopefully, this will help unify the two worlds, making the industry more cohesive and helping to drive more product volume. I don't think we understand yet what the two companies will be able to achieve working as one, but it is quite clear that together they will be able to create technologies and opportunities that simply didn't exist before.

Edible Lenticular Lenses

On a lighter note, one of the most fun things I saw at Siggraph 2023 was a poster by researchers from Meiji University in Japan who have managed to create edible lenticular lenses using a specially designed knife. A lenticular lens presents a different image at each viewing angle, which can make an object beneath it appear three-dimensional. A friend of mine pointed out this poster, and when I checked it out myself I saw firsthand how the researchers created an inverse structure of a lenticular lens, turned it into a knife of sorts, and used it to cut an edible jelly into the desired lens shape.
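The principle is easy to demonstrate in code. Here is a minimal sketch — my own illustration, not the researchers' method — that interlaces two views in column strips, the way content is prepared for a lenticular sheet so that each viewing angle sees a different image:

```python
import numpy as np

# Sketch of the lenticular principle: interlace two views in column
# strips so that, under a lenticular sheet, each eye (or viewing angle)
# sees only one of them. The strip width here is arbitrary; in a real
# print it must match the pitch of the lenses.

def interlace(view_a: np.ndarray, view_b: np.ndarray, strip: int = 4) -> np.ndarray:
    """Alternate column strips of two equally sized images."""
    assert view_a.shape == view_b.shape
    out = view_a.copy()
    w = view_a.shape[1]
    for x in range(0, w, 2 * strip):
        out[:, x + strip : x + 2 * strip] = view_b[:, x + strip : x + 2 * strip]
    return out

# Two synthetic "views": a dark image and a bright one.
h, w = 64, 128
left = np.full((h, w), 40, dtype=np.uint8)
right = np.full((h, w), 220, dtype=np.uint8)
print(interlace(left, right).mean())  # ~130: half the columns from each view
```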

While this is purely a research project without a real-world application yet, it is interesting to think about what could be done with edible lenticular lenses. Lenticular lenses are usually made of plastic, but these researchers used their edible version to create color-shifting and vanishing effects, and there may be other uses in the future.

My Disappointment with the Organization of This Year's Show

Unfortunately, in terms of overall coordination, Siggraph 2023 felt like a step backwards from previous years. It seemed that the organizers rushed certain planning aspects, especially because some talks were staged in rooms that were far too small. Perhaps Siggraph underestimated how many people would actually attend in person this year—after the pandemic-induced uncertainties of the past few years—and simply didn't have enough space for all of them. Whatever the case, I attended multiple talks where literally hundreds of people were waiting in line outside the rooms, many of whom had paid hundreds of dollars to attend arguably the premier graphics conference in the world.

In my opinion, Siggraph truly is the premier graphics conference because it encompasses researchers, artists, engineers, students and the software and hardware companies that drive so much innovation. Siggraph is a much more diverse event than GDC, which focuses primarily on gaming, and it spans so much of the graphics industry; it would be a shame for a conference that is usually so well organized and curated to take any more steps back. I have loved attending Siggraph in many prior years, but this year it felt like the show was simply not planned well enough in advance and that things were hastily put together. That's the last thing I would want from one of my favorite conferences, especially one that has been around for decades and was celebrating the organization's 50th anniversary this year.

Moor Insights & Strategy provides or has provided paid (wish services to technology companies, like all tech industry research and analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and video and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Adobe, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Ampere Computing, Analog Devices, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Avaya Holdings, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Cadence Systems, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cohesity, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Elastic, Ericsson, Extreme Networks, Five9, Flex, Fortinet, Foundries.Io, Foxconn, Frame (now VMware), Frore Systems, Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, HYCU, IBM, Infinidat, Infoblox, Infosys, Inseego, IonQ,  IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Intuit, Iron Mountain, Jabil Circuit, Juniper Networks, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo,  Linux Foundation, Lightbits Labs, LogicMonitor, LoRa Alliance, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, MemryX, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, Movandi, Multefire Alliance, National Instruments, Neat, NetApp, Netskope, Nightwatch, NOKIA, Nortek, Novumind, NTT, NVIDIA, Nutanix, Nuvia (now Qualcomm), NXP, onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Rigetti Computing, Ring Central, Salseforce.Com,  Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Veeam, Ventana Micro Systems, Vidyo, Volumez, VMware, Wave Computing, Wells Fargo, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler.  


How Nvidia Built A Competitive Moat Around A.I. Chips

Naveen Rao, a neuroscientist turned tech entrepreneur, once tried to compete with Nvidia, the world's leading maker of chips tailored for artificial intelligence.

At a start-up that the semiconductor giant Intel later bought, Mr. Rao worked on chips intended to replace Nvidia's graphics processing units, which are components adapted for A.I. tasks like machine learning. But while Intel moved slowly, Nvidia swiftly upgraded its products with new A.I. features that countered what he was developing, Mr. Rao said.

After leaving Intel and leading a software start-up, MosaicML, Mr. Rao used Nvidia's chips and evaluated them against those from rivals. He found that Nvidia had differentiated itself beyond the chips by creating a large community of A.I. programmers who consistently invent using the company's technology.

"Everybody builds on Nvidia first," Mr. Rao said. "If you come out with a new piece of hardware, you're racing to catch up."

Over more than 10 years, Nvidia has built a nearly impregnable lead in producing chips that can perform complex A.I. tasks like image, facial and speech recognition, as well as generating text for chatbots like ChatGPT. The onetime industry upstart achieved that dominance by recognizing the A.I. trend early, tailoring its chips to those tasks and then developing key pieces of software that aid in A.I. development.

Jensen Huang, Nvidia's co-founder and chief executive, has since kept raising the bar. To maintain its leading position, his company has also offered customers access to specialized computers, computing services and other tools of their emerging trade. That has turned Nvidia, for all intents and purposes, into a one-stop shop for A.I. development.

While Google, Amazon, Meta, IBM and others have also produced A.I. chips, Nvidia today accounts for more than 70 percent of A.I. chip sales and holds an even bigger position in training generative A.I. models, according to the research firm Omdia.

In May, the company's status as the most visible winner of the A.I. revolution became clear when it projected a 64 percent leap in quarterly revenue, far more than Wall Street had expected. On Wednesday, Nvidia — which has surged past $1 trillion in market capitalization to become the world's most valuable chip maker — is expected to confirm those record results and provide more signals about booming A.I. demand.

"Customers will wait 18 months to buy an Nvidia system rather than buy an available, off-the-shelf chip from either a start-up or another competitor," said Daniel Newman, an analyst at Futurum Group. "It's incredible."

Mr. Huang, 60, who is known for a trademark black leather jacket, talked up A.I. for years before becoming one of the movement's best-known faces. He has publicly said computing is going through its biggest shift since IBM defined how most systems and software operate 60 years ago. Now, he said, GPUs and other special-purpose chips are replacing standard microprocessors, and A.I. chatbots are replacing complex software coding.

"The thing that we understood is that this is a reinvention of how computing is done," Mr. Huang said in an interview. "And we built everything from the ground up, from the processor all the way up to the end."

Mr. Huang helped start Nvidia in 1993 to make chips that render images in video games. While standard microprocessors excel at performing complex calculations sequentially, the company's GPUs do many simple tasks at once.
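That contrast can be shown in miniature. In the sketch below, NumPy's vectorized call stands in for the data-parallel style a GPU executes in hardware, next to the one-element-at-a-time loop of a scalar processor:

```python
import numpy as np

# The CPU/GPU contrast in miniature. A scalar processor steps through
# elements one at a time; a GPU applies the same simple operation to
# many elements at once. NumPy's vectorized call stands in here for
# that data-parallel style.

pixels = np.random.rand(1_000_000)

# Sequential, one element at a time (how a naive CPU loop works):
brightened_seq = np.empty_like(pixels)
for i in range(pixels.size):
    brightened_seq[i] = min(pixels[i] * 1.5, 1.0)

# Data-parallel, the whole array in one operation (the GPU style):
brightened_par = np.minimum(pixels * 1.5, 1.0)

assert np.allclose(brightened_seq, brightened_par)
```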

In 2006, Mr. Huang took that further. He announced software technology called CUDA, which helped program the GPUs for new tasks, turning them from single-purpose chips to more general-purpose ones that could take on other jobs in fields like physics and chemical simulations.

A big breakthrough came in 2012 when researchers used GPUs to achieve humanlike accuracy in tasks such as recognizing a cat in an image — a precursor to recent developments like generating images from text prompts.

Nvidia responded by turning "every aspect of our company to advance this new field," Mr. Huang recently said in a commencement speech at National Taiwan University.

The effort, which the company estimated has cost more than $30 billion over a decade, made Nvidia more than a component supplier. Besides collaborating with leading scientists and start-ups, the company built a team that directly participates in A.I. activities like creating and training language models.

Advance warning about what A.I. practitioners need led Nvidia to develop many layers of key software beyond CUDA. Those included hundreds of prebuilt pieces of code, called libraries, that save labor for programmers.

In hardware, Nvidia gained a reputation for consistently delivering faster chips every couple of years. In 2017, it started tweaking GPUs to handle specific A.I. calculations.

That same year, Nvidia, which typically sold chips or circuit boards for other companies' systems, also began selling complete computers to carry out A.I. tasks more efficiently. Some of its systems are now the size of supercomputers, which it assembles and operates using proprietary networking technology and thousands of GPUs. Such hardware may run for weeks to train the latest A.I. models.

"This type of computing doesn't allow for you to just build a chip and customers use it," Mr. Huang said in the interview. "You've got to build the whole data center."

Last September, Nvidia announced the production of new chips named H100, which it enhanced to handle so-called transformer operations. Such calculations turned out to be the foundation for services like ChatGPT, which have prompted what Mr. Huang calls the "iPhone moment" of generative A.I.
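The "transformer operations" in question are dominated by attention, which can be written in a few lines. Here is a hedged NumPy sketch of scaled dot-product attention, the core computation that hardware like the H100 is tuned to accelerate; the shapes and values are arbitrary:

```python
import numpy as np

# Scaled dot-product attention, the heart of the transformer:
#   Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 64)) for _ in range(3))
print(attention(Q, K, V).shape)  # (8, 64)
```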

To further extend its influence, Nvidia has also recently forged partnerships with big tech companies and invested in high-profile A.I. Start-ups that use its chips. One was Inflection AI, which in June announced $1.3 billion in funding from Nvidia and others. The money was used to help finance the purchase of 22,000 H100 chips.

Mustafa Suleyman, Inflection's chief executive, said that there was no obligation to use Nvidia's products but that competitors offered no viable alternative. "None of them come close," he said.

Nvidia has also directed cash and scarce H100s lately to upstart cloud services, such as CoreWeave, that allow companies to rent time on computers rather than buying their own. CoreWeave, which will operate Inflection's hardware and owns more than 45,000 Nvidia chips, raised $2.3 billion in debt this month to help buy more.

Given the demand for its chips, Nvidia must decide who gets how many of them. That power makes some tech executives uneasy.

"It's really important that hardware doesn't become a bottleneck for A.I. Or gatekeeper for A.I.," said Clément Delangue, chief executive of Hugging Face, an online repository for language models that collaborates with Nvidia and its competitors.

Some rivals said it was tough to compete with a company that sold computers, software, cloud services and trained A.I. models, as well as processors.

"Unlike any other chip company, they have been willing to openly compete with their customers," said Andrew Feldman, chief executive of Cerebras, a start-up that develops A.I. Chips.

But few customers are complaining, at least publicly. Even Google, which began creating competing A.I. chips more than a decade ago, relies on Nvidia's GPUs for some of its work.

Demand for Google's own chips is "tremendous," said Amin Vahdat, a Google vice president and general manager of compute infrastructure. But, he added, "we work really closely with Nvidia."

Nvidia doesn't discuss prices or chip allocation policies, but industry executives and analysts said each H100 costs $15,000 to more than $40,000, depending on packaging and other factors — roughly two to three times more than the predecessor A100 chip.

Pricing "is one place where Nvidia has left a lot of room for other folks to compete," said David Brown, a vice president at Amazon's cloud unit, arguing that its own A.I. Chips are a bargain compared with the Nvidia chips it also uses.

Mr. Huang said his chips' greater performance saved customers money. "If you can reduce the time of training to half on a $5 billion data center, the savings is more than the cost of all of the chips," he said. "We are the lowest-cost solution in the world."
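His arithmetic is easy to make concrete. In the sketch below, every figure other than the $5 billion from the quote is an assumption chosen purely for illustration:

```python
# Making Mr. Huang's claim concrete. Apart from the $5 billion data
# center from the quote, every figure here is an assumed, illustrative value.

data_center_cost = 5_000_000_000  # $5B facility, per the quote
chip_share = 0.4                  # assume chips are 40% of the total cost

chips_cost = data_center_cost * chip_share

# Halving training time frees half of the facility's capacity, so over
# its life the freed capacity is worth half the facility's cost.
capacity_freed_value = 0.5 * data_center_cost

print(f"cost of all the chips:   ${chips_cost / 1e9:.1f}B")
print(f"value of freed capacity: ${capacity_freed_value / 1e9:.1f}B")
# Under these assumptions the freed capacity ($2.5B) exceeds the chip
# outlay ($2.0B), which is the shape of the argument being made.
```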

He has also started promoting a new product, Grace Hopper, which combines GPUs with internally developed microprocessors, countering chips that rivals say use much less energy for running A.I. services.

Still, more competition seems inevitable. One of the most promising entrants in the race is a GPU sold by Advanced Micro Devices, said Mr. Rao, whose start-up was recently purchased by the data and A.I. company Databricks.

"No matter how anybody wants to say it's all done, it's not all done," Lisa Su, AMD's chief executive, said.

Cade Metz contributed reporting.








