
Intel and Tencent debut AI-powered camera systems for retail

Tencent and Intel are teaming up to launch a pair of artificially intelligent (AI) products for retail, the two announced today during Tencent’s Global Partner conference in China. Both products were developed by Tencent’s YouTu Lab — its computer vision research division — and have Intel’s Movidius Myriad chips inside.

The first is DeepGaze, an AI-powered camera for brick-and-mortar stores that keeps tabs on shoppers’ movements. It can track the number of customers near a given shelf display at various times throughout the day and perform hybrid object detection — some on-device and the rest in Tencent’s Intel Xeon Scalable processor-based cloud. DeepGaze sports the Movidius Myriad 2 vision processing unit (VPU), the same chip inside Google’s Clips camera, Flir’s Firefly, and DJI’s Phantom 4 drone. It’s optimized for image signal processing and inference — the point at which a trained AI model makes predictions — on-device.

“With artificial intelligence, enterprises can gain new insights about their customers to both elevate the users’ experience and drive business transformation,” said Remi El-Ouazzane, vice president and chief operating officer of Intel’s AI Products Group. He said Tencent’s new solution takes advantage “of powerful Intel … chips to enable deep neural networks to run directly on the cameras, providing real-time and actionable data for various businesses, including retail and smart buildings.”

DeepGaze complements the YouBox, also announced today. It’s an on-premises server similarly designed for retail that, with the help of onboard AI systems, can ingest real-time feeds from up to 16 cameras and derive useful insights. Store owners can use it to predict sales performance and product turnover, Intel and Tencent said, enabling them to restock shelves without the need for manual inventory management. Under the hood is the Movidius Myriad X VPU, which features a dedicated hardware accelerator for AI computations. “Intel is the perfect partner for our flexible enterprise solutions,” said Simon Wu, general manager at Tencent’s YouTu Lab. “Based on Intel Movidius Myriad chips and VPUs, the YouTu camera and box perform inference at the edge in tandem with Intel Xeon Scalable processors in the cloud to provide cost-effective and flexible solutions for verticals including retail and construction.”
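The announcement doesn't detail how DeepGaze splits work between the camera and Tencent's cloud, but a hybrid edge/cloud detection pipeline of the kind described typically looks something like the sketch below. Everything here is hypothetical: the function names, the confidence threshold, and the endpoint URL are placeholders, not Tencent's actual API.

```python
# Hypothetical sketch of a hybrid edge/cloud detection split; not Tencent's actual pipeline or API.
import requests  # used for the cloud round trip

CLOUD_ENDPOINT = "https://example.com/classify"  # placeholder URL

def detect_on_device(frame):
    """Run a lightweight detector locally; in a DeepGaze-like setup this would be a model
    compiled for the on-camera VPU. Returns a list of detections with confidence scores."""
    return [{"box": (0, 0, 64, 64), "label": "person", "score": 0.91}]  # stand-in result

def classify_in_cloud(crop_bytes):
    """Send an ambiguous crop to a heavier model running in the cloud."""
    resp = requests.post(CLOUD_ENDPOINT, data=crop_bytes, timeout=2.0)
    return resp.json()

def process_frame(frame, crop_fn, threshold=0.8):
    """Keep confident on-device detections; defer uncertain ones to the cloud."""
    results = []
    for det in detect_on_device(frame):
        if det["score"] >= threshold:
            results.append(det)
        else:
            results.append(classify_in_cloud(crop_fn(frame, det["box"])))
    return results
```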

Tracking customers in the real world

With DeepGaze and YouBox, Tencent and Intel are dipping a tentative toe into an increasingly lucrative market: AI-driven retail analytics. They’re not the first. In June, Japanese telecom company NTT East collaborated with startup Earth Eyes to create AI Guardsman, a machine learning system that attempts to prevent shoplifting by scanning live camera feeds for suspicious activity. Firms like Standard Cognition and Trigo, meanwhile, are leveraging machine learning to build cashierless, data-rich shopping experiences in physical stores. Tel Aviv-based Trigo, like AI Guardsman, taps a network of cameras to track customers through aisles, automatically tabulate their bills, and surface coupons and other engagement opportunities. Standard Cognition — which opened a location in San Francisco last month — offers its retail partners and the customers who shop with them a comparable AI-driven solution, as does Zippin, which also debuted a checkout-free store in the Bay Area recently.

Amazon is the elephant in the room. Its Amazon Go store chain employs sensors, AI, and smartphones to streamline retail flows, and it’s reportedly bent on nationwide expansion: Bloomberg reported in September that the company plans to open as many as 3,000 locations by 2021, up from the four operating today. Even Microsoft is said to be working on cashierless store technology.

AI-first strategies

Both Tencent and Intel see AI as a key part of their respective growth strategies. Tencent’s AI funding arm is one of the largest of its kind; the company has poured more capital into startups and AI chips than its biggest Chinese rivals, Baidu and Alibaba. One of its largest single investments is in robotics startup UBTech, which aims to develop a humanoid robot capable of walking downstairs and autonomously navigating unfamiliar environments. In 2017, Tencent opened an AI research lab in Bellevue, Washington, led by Dr. Dong Yu, a former Microsoft engineer and pioneer in speech recognition tech. (The company’s other AI lab is based in Shenzhen.) And its YouTu Lab, which recently open-sourced some of its developer tools, is working with customers like China Unicom and WeBank on facial ID authentication.

For its part, Intel sees partnerships with OEMs like Tencent as a step toward its ambitious goal of capturing the $200 billion AI market. In August, it bought Vertex.ai, a startup developing a platform-agnostic AI model suite, for an undisclosed amount. Meanwhile, the chipmaker’s acquisition of Altera brought field-programmable gate arrays (integrated, reconfigurable circuits) into its product lineup, and its purchases of Movidius and Nervana bolstered its real-time processing portfolio. Of note, Nervana’s neural network processor, which is expected to begin production in late 2019, can reportedly deliver up to 10 times the AI training performance of competing graphics cards.

“After 50 years, this is the biggest opportunity for the company,” Navin Shenoy, executive vice president at Intel, said at the company’s Data Centric Innovation Summit this year. “We have 20 percent of this market today … Our strategy is to drive a new era of data center technology.”

Microsoft today announced it is embracing Chromium for Edge browser development on the desktop. The news includes plenty of exciting changes, including the decoupling of Edge from Windows 10, more frequent updates, and support for Chrome extensions. But we also wanted to find out what other major browser makers think of the news.

Google largely sees Microsoft’s decision as a good thing, which is not exactly a surprise given that the company created the Chromium open source project. “Chrome has been a champion of the open web since inception and we welcome Microsoft to the community of Chromium contributors,” a Google spokesperson told VentureBeat. “We look forward to working with Microsoft and the web standards community to advance the open web, support user choice, and deliver great browsing experiences.” What Google’s statement doesn’t say is that the company still isn’t happy with Edge. The Microsoft Store still doesn’t allow non-EdgeHTML browsers, meaning devices running Windows 10 in S Mode can’t install Chrome, Firefox, or any third-party browser. Microsoft has yet to say if that will change.

Mozilla, meanwhile, sees Microsoft’s move as further validation that users should switch to Firefox. “This just increases the importance of Mozilla’s role as the only independent choice,” a Mozilla spokesperson told VentureBeat. “We are not going to concede that Google’s implementation of the web is the only option consumers should have. That’s why we built Firefox in the first place and why we will always fight for a truly open web.” Mozilla regularly points out that it develops the only independent browser — meaning it’s not tied to a tech company whose priorities often don’t align with the web’s. Apple (Safari), Google (Chrome), and Microsoft (Edge) all have their own corporate interests.

A Chromium-based Edge means a lot for the few users who actively use Edge, but much more interesting will be the impact on the broader web. Chrome dominates already — will this only cement its place, or will the competition heat up? We also contacted Apple and Opera and will update this story if we hear back. Update at 12:10 p.m. Pacific: Opera thinks Microsoft is making a smart move, because it did the same thing six years ago. “We noticed that Microsoft seems very much to be following in Opera’s footsteps,” an Opera spokesperson told VentureBeat. “Switching to Chromium is part of a strategy Opera successfully adopted in 2012. This strategy has proved fruitful for Opera, allowing us to focus on bringing unique features to our products. As for the impact on the Chromium ecosystem, we are yet to see how it will turn out, but we hope this will be a positive move for the future of the web.”

Samsung to build 5G and V2X networks for autonomous car tests at South Korea’s K-City

Samsung is collaborating with the Korea Transportation Safety Authority (KOTSA) to develop mobile network infrastructure for autonomous vehicles at the recently opened K-City test facility. K-City, for the uninitiated, is one of a number of “fake cities” that have emerged as test beds for the latest smart city technologies. Google parent company Alphabet last year offered a glimpse into Castle, a key test hub for its driverless car subsidiary, Waymo. Incidentally, Waymo launched its first commercial self-driving car service in Phoenix just yesterday. And Russia opened a tech-focused “town” called Innopolis back in 2012, where Yandex recently kickstarted tests for its autonomous taxis. Against that backdrop, the South Korean government announced K-City back in May, though it only partially opened for business last month.

Largest

At 320,000 square meters, Korea is touting K-City as the largest dedicated facility for testing self-driving cars in the world, built to replicate all manner of real-world scenarios, including bus lanes, bike lanes, highways, built-up urban areas, parking bays, and more. Situated about an hour’s drive south of Seoul in the city of Hwaseong, K-City cost around $10 million to build.

5G … or not 5G

For those who haven’t followed the latest developments in 5G, the fifth generation of mobile communications, it represents much more than crazy download speeds on your smartphone — though that will be one benefit. Effectively, 5G will be the biggest enabler of artificial intelligence (AI) and many other technologies across the smart city and autonomous vehicle spectrum. Samsung is one of the technology companies at the forefront of the 5G push, and it recently set aside $22 billion to plow into a range of transformative technologies, including 5G and AI. That the Korean juggernaut has been selected to help build the infrastructure underpinning autonomous car tests at K-City should perhaps come as little surprise. “The prominence of autonomous vehicles and connected cars is growing rapidly in the 5G era, and Samsung’s commitment to collaborative innovation in this area is stronger than ever,” said Samsung executive Jaeho Jeon in a press release.

The Samsung/KOTSA collaboration isn’t just about 5G, however — it will cover 4G LTE, vehicle-to-everything (V2X) communication systems, and related hardware infrastructure. “By building various telecommunication networks — including 5G, 4G, and V2X — in one place, K-City will provide real-world experiences of autonomous driving for people and businesses across the industry,” added KOTSA director Byung Yoon Kwon. “This open environment is expected to [serve] as a unique innovation lab for industry partners that will ultimately [accelerate] the availability of the autonomous driving era.”

1 billion AR/VR ad impressions: What we’ve learned

Two years ago, Mark Zuckerberg donned a VR headset at Mobile World Congress against a backdrop that read “the next platform.” It sparked fervor and big investment in the VR space. But in the past year, many critics have questioned the viability of the industry, as headset sales underwhelm and buzzy technologies like blockchain and AI debut as the new starlets in town. And yet, as an immersive ad-serving technology company working closely with brands, publishers, and producers, we have seen the demand for VR/AR marketing solutions accelerate, even amidst the supposed slump of VR. A year ago we served 100 million VR ad impressions. This year, we’ve served over 1 billion. As we set out to show how immersive advertising beats traditional digital advertising, we learned a few lessons along the way. Here are our three key learnings from serving 1 billion VR/AR ad impressions.

Prove campaign performance

A year ago, our top goal was identifying how VR could best be applied to brands’ marketing objectives. We learned that brands investing in VR care most about 1) deep audience engagement and 2) audience reach. This still holds true today, but brands now expect real ROI from their investment in this medium. A year ago, it was easier to convince brands to try VR as a trendy innovation test. Now, the trial period is over, and a clear explanation of VR’s contribution to meeting (and exceeding) marketing objectives is needed for brands to continue to invest. It is important not only to sell VR, but to sell solutions that meet customers’ business objectives.

It is easy to say that VR performs, but it is an entirely different matter to prove it. Advertising technology for digital media has become a precise science over the past 20 years. As such, brands’ expectations regarding the accuracy of what is reported are extremely high (and rightfully so). It is not enough to simply create a piece of VR content. Brands need to know how many people saw, engaged with, watched, and ultimately converted as a result of their investment in this content. This is how brands and their agencies currently operate. As an industry, we will scale faster if we can fit into our clients’ existing campaign operations. This means that as a provider that serves VR ads, we not only report on viewability, completion rate, and engagement rate, but we also ensure we can reliably compare how a 360-degree VR ad performs against what our clients are currently running in 2D formats, using standard metrics. To prove the uplift of VR ads over 2D in an accurate and client-friendly way, we take the following approach:

  • Run both existing 2D creative and new 360-degree VR experiences to compare the performance across the same placement
  • Use standard ad tracking tools to measure the uplift
  • To validate the data further, install third-party tracking pixels from companies like DoubleClick, MOAT, and DoubleVerify

We do this for all of our campaigns. And the results are clear: 360-degree VR ads outperform 2D. To further understand these results, we ran ads with different fields of view: 90 degrees, 180 degrees, and 360 degrees. And we noticed something interesting. As the field of view broadened, CTRs and engagement time increased. This means that the bigger the content sphere, the more engaging (and immersive) the content is. With this we clearly and indisputably meet brand objective No. 1: deepen audience engagement.

We also tested performance across different verticals and media segments to see if we could replicate these results. We found that this uplift in performance is consistent across all industries and use cases. In all industries, the same client and campaign will see greater performance from immersive ads compared to the 2D creative they typically run. And this is why our business in particular has seen repeat business. Customers like Universal Pictures, Travel Nevada, Clorox, Cathay Pacific, The Home Depot, and Disney Broadway have launched multiple VR ad campaigns. These are companies that not only want creative innovation but also require performance results to continue to invest in this medium.

We’re seeing a shift coming, but it will take some work to guarantee it. Eighty-eight billion dollars goes into digital advertising each year; 99 percent of this spend goes into 2D experiences, even when the data clearly shows that consumers are more engaged with 3D content. It’s the VR industry’s job to prove and communicate this.
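For context, the uplift comparison described in the approach above can be computed from standard tracking metrics roughly as in the sketch below. The impression, click, and engagement counts are placeholders, not data from any campaign.

```python
# Hypothetical uplift calculation for a 2D vs. 360-degree creative test on the same placement.
variants = {
    "2d_banner": {"impressions": 100_000, "clicks": 450, "engaged_seconds": 310_000},
    "360_vr_ad": {"impressions": 100_000, "clicks": 720, "engaged_seconds": 540_000},
}

def ctr(v):
    return v["clicks"] / v["impressions"]

def avg_engagement(v):
    return v["engaged_seconds"] / v["impressions"]

baseline, test = variants["2d_banner"], variants["360_vr_ad"]
ctr_uplift = (ctr(test) - ctr(baseline)) / ctr(baseline)
engagement_uplift = (avg_engagement(test) - avg_engagement(baseline)) / avg_engagement(baseline)

print(f"CTR uplift: {ctr_uplift:.0%}")                  # 60% with these placeholder numbers
print(f"Engagement-time uplift: {engagement_uplift:.0%}")  # ~74% with these placeholder numbers
```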

Deliver experiences that scale across platforms

In addition to performance and deepening audience engagement, we can’t forget about ensuring reliability in our technology solution. Remember the days when Internet Explorer, Firefox, and Chrome all rendered the same website differently? Distributing VR content across all platforms and browsers faces a similar challenge today. It’s like a rewind back to Web 2.0, except worse. Today, the modern web includes mobile browsers like Safari, a variety of Android browsers, and embedded web viewers inside mobile native apps. Each environment has its own restrictions, which VR must overcome. No brand wants to deliver a broken experience to their audience because of these restrictions.

We needed to address this fragmentation to make sure we met brand objective No. 2: audience reach. If the audience needs to download an app and/or receives a broken experience when trying to access the VR content, then brands cannot maximize their content’s reach. We found that even YouTube’s 360-degree video player doesn’t work on the iPhone (even with the newest Safari and iOS) or on many Android browsers. As such, we tailored our 3D graphics rendering technology to ensure high audience engagement across all browsers, web and mobile. To be successful in bringing innovation to customers, it is critical to have your product work seamlessly where audiences are today.

Show, don’t tell

In any exciting new industry, there will be a lot of new players and noise. While anyone can speak to their capabilities, the best way to convince a brand or agency to commit their dollars to your product is a live demo on their devices. Customers need to see it to believe it. We found that talking about the promise of our solution wasn’t enough. This is especially applicable in VR and AR, where most companies are very new and the customer is not yet familiar with their product. This is why every new product needs a live demo before the sales and marketing pitch begins. This may sound like common sense, but if you do a quick review of most VR and AR startups, you will see that more than 90 percent only have marketing material on their landing page, with no live demo or self-service product to experience. Our first customer in the US, The New York Times, chose to use our platform because they could first try a live demo of our ad solution. After an engineer validated the technology, they reached out with confidence that we were the right partner for their VR advertising needs. Live demos are critical to show, not tell, the magic of immersive content, and we found we had to do this for every ad format available in the industry today. Here are a few examples:

Force Push turns you into a Jedi in VR

Ever wanted to wield the power of a Jedi inside VR? This new system from Virginia Tech researchers lets you do just that. Force Push is a new object manipulation system for VR being developed at the institution’s College of Engineering. It uses hand tracking (namely a Leap Motion sensor fitted to the front of the Oculus Rift) to let users push, pull, and rotate virtual objects from a distance, just like a Skywalker would. Run Yu, a Ph.D. candidate in the Department of Computer Science, and Professor Doug Bowman have been working on it for some time, as can be seen in the video below. The pair’s research was recently published in a new report.

As the footage shows, objects are moved simply by gesturing in the direction you want them to go. Motion toward yourself to bring an item closer, flick your hand up to raise it off the ground, and, of course, push your hand outward to have it shoot off into the distance. You can even raise your index finger and make a rotating motion to turn the object around. It’s a pretty cool system, though we’d like to see it working without the repeated gestures. Hand tracking itself is some way out from full implementation inside VR headsets, but laying groundwork such as this will help make it a more natural fit if and when it does get here.

“There is still much to learn about object translation via gesture, such as how to find the most effective gesture-to-force mapping in this one case (mapping functions, parameters, gesture features, etc.),” the pair wrote in their report. “We plan to continue searching for improved transfer functions from the gesture features to the physics simulation. Further evaluation of Force Push will focus on more ecologically valid scenarios involving full 3D manipulation.” Now if only we could use this in an actual Star Wars VR game.
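The “gesture-to-force mapping” the researchers describe is essentially a transfer function from hand motion to simulated force. The sketch below illustrates the general idea only; the gain, exponent, and force cap are made-up values, not the parameters from the Force Push paper.

```python
# Illustrative gesture-to-force transfer function; not the researchers' actual mapping.
import numpy as np

def gesture_to_force(hand_velocity, direction, gain=2.5, exponent=1.5, max_force=50.0):
    """Convert a tracked hand velocity (m/s) into a force vector (N) along a push direction.

    A super-linear mapping means small flicks nudge the object while fast pushes
    send it flying, roughly the behavior described in the article.
    """
    direction = direction / np.linalg.norm(direction)
    speed_along = max(float(np.dot(hand_velocity, direction)), 0.0)  # ignore pull-back motion
    magnitude = min(gain * speed_along ** exponent, max_force)       # cap the applied force
    return magnitude * direction

# Example: a quick 1.8 m/s push straight ahead yields roughly a 6 N force.
print(gesture_to_force(np.array([0.0, 0.0, 1.8]), np.array([0.0, 0.0, 1.0])))
```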

Vehicle telematics data could unlock $1.5 trillion in future revenue for automakers

Vehicle telematics, the method of monitoring a moving asset like a car, truck, heavy equipment, or ship with GPS and onboard diagnostics, produces an extraordinarily large and fast-moving stream of data that did not exist even a few years ago. And now, the vehicle telematics data hose has been turned to full blast. By 2025, there will be 116 million connected cars in the U.S. — and according to one estimate by Hitachi, each of those connected cars will upload 25 gigabytes of data to the cloud per hour. If you do the math, that’s 219 terabytes per car each year, and by 2025, it works out to roughly 25 billion terabytes of total connected car data each year. It’s a tsunami of data that is about to transform the transportation industry, says Grant Halloran, chief marketing officer at OmniSci.
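Spelled out, the arithmetic behind those figures looks like this. It assumes, as the estimate implicitly does, that a car uploads around the clock.

```python
# The arithmetic behind the connected-car data estimates quoted above.
gb_per_hour = 25                    # Hitachi estimate per connected car
hours_per_year = 24 * 365
cars_2025 = 116_000_000             # projected U.S. connected cars by 2025

tb_per_car_per_year = gb_per_hour * hours_per_year / 1_000    # 219 TB per car per year
total_tb_per_year = tb_per_car_per_year * cars_2025           # ~25.4 billion TB per year

print(f"{tb_per_car_per_year:,.0f} TB per car per year")
print(f"{total_tb_per_year:,.0f} TB across all U.S. connected cars per year")
```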

An entirely new transportation industry

For auto manufacturers, revenue used to come almost exclusively from one-time vehicle sales and trailing maintenance. But as populations become more urban and traffic congestion becomes a bigger problem, demand for new cars comes under downward pressure (and margins on one-time car sales shrink). “There are these irreversible trends going on in the marketplace, like ride sharing, better (and new forms of) public transport and increasing urbanization, which cause people to be less and less likely over time to buy their own car,” Halloran says. “The automakers are saying, we have this hub of data we control, but how are we going to monetize it?”

The data that connected cars and autonomous vehicles produce opens up entirely new revenue streams that the automaker can control (and share with partners in other sectors). According to McKinsey, monetizing onboard services could create $1.5 trillion – or 30 percent more – in additional revenue potential by 2030, which would more than offset any decline in car sales. And this data on how a driver and vehicle interact can also give automotive manufacturers, logistics companies, fleet managers, and insurance companies valuable information on how to make transportation safer, more efficient, and more enjoyable — but they must be able to handle these huge new streams of data and analyze them to extract insights.

What is vehicle telematics?

Vehicle telematics is a method of monitoring and harvesting data from any moving asset, like a car, truck, heavy equipment, or ship, by using GPS and onboard diagnostics to record movements and vehicle condition at points in time. That data is then transmitted to a central location for aggregation and analysis, typically on a digital map. Telematics can measure location, time, and velocity; safety metrics such as excessive speed, sudden braking, rapid lane changes, or stopping in an unsafe location, as well as maintenance requirements; and in-vehicle consumption of entertainment content. “For example, we have a major automaker doing analysis of driver behavior for improvements to vehicle design and, potentially, value-added in-car information services to the driver,” Halloran says. Traditional analytics systems are unable to handle that extreme volume and velocity of telematics data, and they don’t have the ability to query and visualize it within the context of location and time, also known as spatiotemporal data. Next-generation analytics tools like OmniSci enable analysts to visually interact with telematics data at the speed of curiosity.
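To make those categories concrete, a single telematics record might be shaped roughly like the sketch below. The field names are illustrative only, not an OmniSci or automaker schema.

```python
# Illustrative shape of a single telematics record; field names are hypothetical.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class TelematicsRecord:
    vehicle_id: str
    timestamp: datetime                      # the "time" half of spatiotemporal data
    latitude: float                          # GPS position: the "space" half
    longitude: float
    speed_kph: float
    harsh_braking: bool                      # example safety metric from onboard diagnostics
    rapid_lane_change: bool
    engine_fault_code: Optional[str] = None  # maintenance signal, if any

# One sample reading, as it might arrive before aggregation on a digital map.
record = TelematicsRecord(
    vehicle_id="VIN-0001",
    timestamp=datetime(2018, 12, 7, 8, 30),
    latitude=37.77, longitude=-122.42,
    speed_kph=62.0,
    harsh_braking=False,
    rapid_lane_change=True,
)
```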

The challenges of extracting insights from telematics data

The insights are there; the discovery is the difficult part, as is usual with data analytics. But vehicle telematics poses some unique obstacles that industry leaders are scrambling to tackle. The data challenges are enormous. Mainstream analytics platforms can’t handle the volume of data generated, or ingest data quickly enough for real-time use cases like driver alerts about weather and road conditions. And very few mainstream platforms can manage spatiotemporal data. Those that do slow to a crawl at a few hundred thousand records, a minuscule volume compared to what connected cars are already generating. Data wrangling has also become a stumbling block. Automakers have already built dedicated pipelines for known data streams, primarily from in-car data generation. But this requires large footprints of hardware, and as new data sources arise, they are very difficult to ingest and join with existing data sources. IT departments spend a lot of low-value time and money just wrangling data so that they can try to analyze it.

Tackling the challenges

Because telematics data is so variable and contextual, it is essential that humans explore those big data streams, Halloran says. For vehicle telematics analysis, you need to be able to query billions of records and return results in milliseconds, and also load data far more quickly than legacy analysis tools can, particularly for streaming and high-ingest-rate scenarios. You need to tackle spatiotemporal data at hyper-speed, as you calculate distances between billions of points, lines, or polygons, or associate a vehicle’s location at a point in time with millions of geometric polygons, which could represent counties, census tracts, or building footprints.

Vehicle telematics data, like other forms of IoT data, is a valuable resource for data scientists who want to build machine learning (ML) models to improve autonomous-driving software and hardware and predict maintenance issues. Machine learning is often presented as conflicting with ad hoc data analysis by humans. Not so, says Halloran. Exploratory data analysis (EDA) is a necessary step in the process of building ML models. Data scientists need to visually explore data to identify the best features to train their models on, or combine existing features to create new ones, in a process called feature engineering. Again, this requires new analytics technology to be done at scale.

Transparency is also essential with machine learning, especially in regulated industries like automotive and transport, Halloran adds. When models are in production, making autonomous recommendations, data scientists need to explain their black-box models to their internal business sponsors and potentially to regulators. Business leaders are reluctant to allow machine learning models to make important decisions if they can’t understand why those decisions are made. “ML models can’t be fired. Human decision-makers can,” notes Halloran. An intuitive, interactive visualization of the data in the model allows data scientists to show others what the model “sees in the data” and more easily explain its decisions, allowing decision-makers to be confident that machine-driven predictive decisions will not breach laws. “One of our automotive customers calls this ‘unmasking the black box,’” says Halloran.
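As a rough illustration of the feature-engineering step described above, here is a minimal sketch using generic pandas rather than OmniSci’s GPU-accelerated tooling. The columns and values are hypothetical.

```python
# Minimal feature-engineering sketch over telematics records using pandas.
import pandas as pd

# Assume a DataFrame with one row per telematics sample (columns are hypothetical).
df = pd.DataFrame({
    "vehicle_id":    ["A", "A", "A", "B", "B"],
    "speed_kph":     [55, 120, 60, 40, 45],
    "harsh_braking": [0, 1, 0, 0, 0],
})

# Derive per-vehicle features that could feed a maintenance or risk model.
features = df.groupby("vehicle_id").agg(
    mean_speed=("speed_kph", "mean"),
    max_speed=("speed_kph", "max"),
    harsh_braking_events=("harsh_braking", "sum"),
)

# Combine existing features into a new one (simple feature engineering).
features["braking_per_kph"] = features["harsh_braking_events"] / features["mean_speed"]
print(features)
```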

Point of no return: the impact on other industries

Automotive and mobility are generalizing into a much broader set of solutions that cross many traditional industry segments. It’s not just automakers that are doing mobility now. Telecommunications companies are helping transmit data or delivering infotainment into the car. Civic authorities want to look at this data to figure out which roads they should repair and how they can improve mass transit. Retailers want to advertise to people in the car or provide a high-end concierge experience as buyers travel to shopping destinations. “For the future, if the automakers do claim ownership of the primary source of mobility data, they will build partnerships across traditional barriers that have divided industries,” Halloran says. “That provides new opportunities for cooperation, and also new opportunities for competition. One of the best ways to come out ahead in that new landscape is to understand what the data tells them, so that they can go into the relationships that are going to be the most profitable for them with that telematics data.”

StarVR puts developer program ‘on hold’ as financial woes roil Starbreeze

Less than a month after StarVR started accepting applications for its $3,200 developer kit program, the company has confirmed to UploadVR that it’s putting the process “on hold.” Last month, StarVR stated that its first production units for the StarVR One were ready. Developers could apply to purchase the headset, which features a 210-degree horizontal by 130-degree vertical field of view, dual AMOLED panels, integrated eye tracking, and SteamVR 2.0 tracking (though no SteamVR base stations to actually track the device). Thursday, we also reported on StarVR’s claims that its headset would be the first to support the new VirtualLink standard.

But trouble was brewing surrounding the announcement. Ahead of the launch, StarVR announced that it was delisting from the Taipei Exchange Emerging Markets board, citing the current state of the VR industry as one reason. Then, earlier this week, we learned that headset creator Starbreeze, which now owns around a third of StarVR (the other two-thirds belonging to Acer), had filed for reconstruction with the Stockholm District Court. Its offices were raided this week, leading to one arrest linked to insider trading.

Today a StarVR spokesperson provided UploadVR with the following statement: “We believe it is the most responsible course of action to put the StarVR Developer Program on hold while there are uncertainties with our key overseas shareholder, and also while our company is in the process of going private, which may entail some changes to our operations.” The same message has been sent to everyone who had enrolled in the program thus far. The statement certainly seems to refer to Starbreeze’s current difficulties. It’s uncertain what this means for the future of the VR headset, which had been designed for location-based and enterprise experiences. One thing is likely: developers will have to wait at least a little longer to get their hands on the hardware, if it does indeed ever reach their doorsteps.

VR veterans found Artie augmented reality avatar company

The migration of virtual reality veterans to augmented reality continues. A new AR startup dubbed Artie is coming out of stealth mode today in Los Angeles with the aim of giving you artificial intelligence companions in your own home. Armando Kirwin and Ryan Horrigan started the company to use artificial intelligence and augmented reality to build “emotionally intelligent avatars” as virtual companions for people. Those avatars would be visible anywhere you can take your smartphone or AR gear, Horrigan said in an interview. The startup has backing from a variety of investors, including YouTube cofounder Chad Hurley, Founders Fund, DCG, and others. But Kirwin said the company isn’t disclosing the amount of the investment yet.

The company’s software will enable content creators to bring virtual characters to life with its proprietary Wonderfriend Engine, which makes it easy to create avatar-to-consumer interactions that are lifelike and highly engaging. Kirwin said the company is working with major entertainment companies to get access to familiar characters from famous brands. “Our ambition is to unlock the world of intellectual property you are already familiar with,” said Kirwin in an interview with VentureBeat. “You can bring them into your home and have compelling experiences with them.” The company hopes to announce some relationships in the first quarter, Kirwin said.

Once created, the avatars exist on an AR network where they can interact and converse with consumers and each other. It reminds me of Magic Leap’s Mica digital human demo, but so far Artie isn’t showing anything quite as fancy as that. “The avatar will use AI to figure out whether you are happy or sad, and that would guide it in terms of the response it should have,” Kirwin said. “Some developers could use this to create photoreal avatars or animated characters.” Artie is also working on Instant Avatar technology to make its avatars shareable via standard hyperlinks, allowing them to be discovered on social media and other popular content platforms (i.e., in the bio of a celebrity’s Instagram account, or in the description of a movie trailer on YouTube).

Horrigan said the team has 10 people, and it is hiring people with skills in AI, AR, and computer vision. One of the goals is to create avatars that are more believable because they can be inserted into the real world in places like your own home. The team has been working for more than a year. “Your avatar can be ready, so you don’t have to talk to it to activate it,” Kirwin said. “It’s always on, and it’s really fast, even though it is cloud-based. We can recognize seven emotional states so far, and 80 different common objects. That’s where the technology stands today.”

Horrigan was previously chief content officer of the Comcast-backed immersive entertainment startup Felix & Paul Studios, where he oversaw content and business development, strategy, and partnerships. He and his team at Felix & Paul forged numerous partnerships with Fortune 500 companies and media conglomerates including Facebook, Google, Magic Leap, Samsung, Xiaomi, Fox, and Comcast, and worked on projects with top brands and A-list talent such as NASA and Cirque du Soleil. One of Felix & Paul’s big projects was a virtual reality tour of the White House with the Obamas. That project, The People’s House, won an Emmy Award for VR, as it captured the White House as the Obama family left it behind.
Prior to Felix & Paul, Horrigan was a movie studio executive at Fox/New Regency, where he oversaw feature film projects including Academy Award Best Picture winner 12 Years a Slave. He began his career in the motion picture department at CAA and at Paramount Pictures. He has given numerous talks, including at TED, Cannes, Facebook, Google, Sundance, SXSW, and throughout China. He holds a bachelor’s degree in film studies and lives in Los Angeles, California.

Kirwin has focused on VR and AR in both Hollywood and Silicon Valley. He has helped create more than 20 notable projects for some of the biggest companies in the world. These projects have gone on to win four Emmy nominations and seven Webby nominations. Prior to cofounding Artie, Kirwin helped create the first 4K streaming video-on-demand service, Odemax, which was later acquired by Red Digital Cinema. He was later recruited by Chad Hurley, cofounder and ex-CEO of YouTube, to join his private technology incubator in Silicon Valley. Prior to his career in immersive entertainment, Kirwin worked on more than 50 projects, predominantly feature films, including “The Book of Eli,” the first major motion picture shot in digital 4K. He also acted as a consultant to the vice president of physical production at Paramount Pictures.

Other investors include Cyan Banister (investing personally), The Venture Reality Fund, WndrCo, M Ventures, Metaverse Ventures, and Ubiquity6 CEO Anjney Midha. Artie has already cemented partnerships with Google and Verizon for early experiments with its technology and is beginning to onboard major media companies, celebrities, influencers, and an emerging class of avatar-based entertainment creators.

Kaggle users can now create Google Data Studio dashboards

Kaggle, a Google-owned community for AI researchers and developers that offers tools to help users find, build, and publish datasets and models, is integrating with Google’s Data Studio. The Mountain View company announced the news in a blog post timed to coincide with the NeurIPS 2018 conference in Montreal this week. Starting this week, users can connect to and visualize Kaggle datasets directly from Data Studio using Kaggle’s Community Connector tool. It’s as simple as browsing for a dataset within Kaggle, picking a file, launching Data Studio with the selected file, and creating an interactive dashboard with Data Studio’s built-in tools. From that point, the dashboard can be published and embedded in a website or blog. Google is also open-sourcing the connector code for the integration in the Data Studio Open Source Repository, which it says will help Data Studio developers and Kaggle users build “newer and better solutions.”

“[With] this new integration, users can analyze these datasets in Kaggle and then visualize findings and publish their data stories using Data Studio,” Minhaz Kazi, a developer advocate at Google, and Megan Risdal, product lead for Kaggle Datasets, wrote in a blog post. “Since there is no cost to use Data Studio and the infrastructure is handled by Google, users don’t have to worry about scalability, even if millions of people view the dashboard … The hassle-free publishing process means everyone can tell engaging stories, open up dashboards for others to interact with, and make better-informed decisions.”

The integration comes a little over a year after Google’s acquisition of Kaggle, which was announced in March at the Cloud Next 2017 conference in San Francisco. Google claims Kaggle is the world’s largest online community of data scientists, with more than 2 million users (up from 1 million in June 2017) and over 10,000 public datasets. Users compete against each other in competitions, testing techniques on real-world tasks for prize pools.

Google Cloud says Security Command Center beta is live with expanded risk-monitoring tools

Google Cloud today announced the availability of its Cloud Security Command Center (Cloud SCC) beta, with a series of new features designed to more quickly identify vulnerabilities and limit damage from threats or attacks. Cloud SCC offers a centralized view that gives users a clear picture of all their cloud assets, according to a blog post by Andy Chang, senior product manager for Google Cloud. “If you’re building applications or deploying infrastructure in the cloud, you need a central place to unify asset, vulnerability, and threat data in their business context to help understand your security posture and act on changes,” he wrote. Cloud SCC was released in alpha last March with the goal of giving more users across an organization a clear view of security issues. The beta version adds:

  • Expanded coverage across GCP services such as Cloud Datastore, Cloud DNS, Cloud Load Balancing, Cloud Spanner, Container Registry, Kubernetes Engine, and Virtual Private Cloud
  • Expanded administrator roles
  • A wider range of notifications
  • Better searching of current and historic assets
  • More client libraries

Deep learning Slack bot Meeshkan wins Slush 100 startup competition

Meeshkan, a company whose Slack bot helps engineers monitor and train machine learning models without leaving the team chat app, has been named winner of the Slush 100 startup competition. Meeshkan works with popular frameworks like PyTorch and TensorFlow and is optimized for deep learning workflows. The competition took place today in Helsinki at Slush, one of the largest annual tech conferences in northern Europe.

“With our interactive machine learning product, out of the box and for free you get monitoring of all your machine learning jobs on Slack,” Meeshkan CEO Mike Solomon said during his pitch. “On top of that, you’re able to schedule as many jobs as you want right from Slack, pause long-running jobs that are executing, tweak parameters for the job that’s executing, fork a job just like you fork a repo on GitHub, and under the hood it will automatically spin up, provision a server, and send the job off and running.”

Meeshkan competed in the semifinals of Slush 100 against Aerones, a heavy-lift drone company that cleans wind turbines and wants to fight fires with drones, and Lifemote Networks, a SaaS service for internet service providers that uses AI for predictive Wi-Fi troubleshooting. More than 1,000 applications from 60 countries were received for the startup competition, according to organizers.

By supplying a tool that lets engineers and data scientists train models without fully understanding how to train and deploy a model from scratch the way a machine learning engineer would, Meeshkan intends to help companies address the widespread shortage of data scientists. In a PricewaterhouseCoopers study released earlier this year, only 4 percent of business executives said their company has successfully implemented AI in their products or services, but that’s expected to change in the years ahead.

The Slush tech conference was attended by 20,000 people, among them more than 1,000 investors and more than 3,000 startups. By category, the largest group of startups in attendance self-identified as AI, big data, or machine learning companies.
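Meeshkan hasn’t published its implementation here, but the basic pattern of reporting training-job progress to Slack can be sketched with a standard incoming webhook. The webhook URL and the toy training loop below are placeholders, not Meeshkan’s code.

```python
# Generic illustration of posting ML training progress to Slack via an incoming webhook.
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder; create one in Slack

def notify_slack(message: str) -> None:
    """Send a plain-text message to the configured Slack channel."""
    requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=5)

def train(num_epochs: int = 10) -> None:
    """Toy training loop that reports progress every few epochs."""
    for epoch in range(1, num_epochs + 1):
        loss = 1.0 / epoch  # stand-in for a real training step
        if epoch % 5 == 0 or epoch == num_epochs:
            notify_slack(f"Training job: epoch {epoch}/{num_epochs}, loss={loss:.3f}")

train()
```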

Hire by Google’s candidate discovery tool exits beta

Hire by Google, the hiring dashboard that’s part of Google’s enterprise-focused G Suite platform, launched a little over a year ago in June. Since then, it’s gained a feature — candidate discovery — that surfaces appropriate candidates for new gigs at a company, along with a veritable suite of AI-powered calendar scheduling, resume review, and phone call tools. Today, candidate discovery, which rolled out to select customers in beta earlier this year, is becoming generally available to all G Suite customers who pay for Hire.

Coinciding with candidate discovery’s wider launch, Google is debuting a new capability that product manager Omar Fernandez says was informed by Hire’s beta testers. It allows customers to screen resumes with smart keyword highlighting based on their search criteria, and to re-engage qualified candidates in bulk. “Throughout the beta period, we listened to customer feedback, and as a result [we introduced this] new feature in candidate discovery,” he wrote in a blog post. “Since the … release of candidate discovery … we’ve heard from many customers how it’s helped them quickly fill open roles at their companies … [One company] was able to fill one of its roles in 24 hours (the average time to hire is four weeks).”

Two of those customers are OpenLogix, a global technology services firm, and Titmouse, an animation studio. OpenLogix uses candidate discovery to search a database of 30,000 prior candidates and create a prioritized list based on how well each candidate’s profile matches the title, job description, and location. Meanwhile, Titmouse taps it to manage thousands of applications submitted through the company’s careers page.

Fernandez noted that candidate discovery is powered by Google’s Cloud Talent Solution (formerly Cloud Job Discovery), a development platform for job search workloads that factors in desired commute time, mode of transit, and other preferences when matching employers with job seekers. It also drives automated job alerts and saved search alerts. According to Google, CareerBuilder, which uses Cloud Talent Solution, saw a 15 percent lift in users who view jobs sent through alerts and a 41 percent increase in “expression of interest” actions from those users.

Hire by Google, for the uninitiated, is a full-stack recruitment tool that lets hiring managers sift through job listings, interview and screen candidates, solicit applications, and more. It natively integrates with Gmail, Google Calendar, and Sheets, automatically filling in details such as contact information in invites and recording data captured across interviews. Moreover, thanks to artificial intelligence (AI), it’s able to recommend appropriate time slots for meetings and interviews, analyze key terms in job descriptions, and highlight candidates’ phone numbers and log calls. News of candidate discovery’s general availability follows on the heels of Google’s job search feature for military veterans, which launched in August. It aims to make it easier for service members to find civilian jobs that align with their occupation, in part by finding jobs in their area that require skills similar to those used in their military roles. Companies that use Cloud Talent Solution can implement the job search feature on their own career sites.

6 things a first-time CEO needs to know

CEO turnover is on the rise across corporate America. The number of top executives who have left their jobs in 2018 has reached a 10-year high, according to outplacement firm Challenger, Gray & Christmas.

Reasons for the trend vary, the firm says, from natural movement in a tight labor market, to economic uncertainty, to a desire by companies, in light of #MeToo, “to let go of leaders that do not fit their culture or otherwise act unethically.”

Whatever the causes, the high turnover has an interesting side effect: more opportunities for executives seeking their very first CEO gigs. These vice presidents, general managers, chief financial officers, et al, typically have spent years working toward a shot at the top spot. The current direction toward fresh blood in the corner office means more can get there.

And yet most have no idea about the challenges they’ll encounter.

It’s a subject close to my heart. After several years in leadership roles at Salesforce and Oracle, I became a first-time CEO at a smaller tech company in fall 2015. I left at the beginning of 2018 as part of a corporate restructuring that would have required me and my family to relocate. Five months later, I landed my second CEO post, at another tech firm.

The first realization that smacked me in the face as a rookie: It’s a really hard job. I had naively thought being CEO would be only incrementally more difficult than other positions I’d held. Running a business unit within a company carries a lot of responsibility, right?

Yes, but it’s not even close to the same. As CEO, the buck stops with you on company success (or lack thereof), culture, brand satisfaction, product quality, funding, communication with the board of directors, and a host of other priorities that don’t really sink in until you’re in the big chair. It’s all a huge thrill, but it also requires a massive adjustment in how the freshly minted CEO thinks and acts.

I reflect frequently on the lessons I learned as a first-time CEO and how I can apply them at my new company. Here are six of the most important ones, in no particular order.


