What is Artificial Intelligence? Types, History, and Future [2023 Edition]



Companies Know That Trustworthy And Responsible AI Is A Business Imperative — Why Are They Hesitating?

Presented by Outshift by Cisco

Generative AI has been around for a while, but in just months ChatGPT democratized AI for anyone with an internet connection, capturing the imagination of business leaders and the public alike. With the technology evolving at a record-breaking pace and being implemented far and wide, embracing responsible AI to manage its ethical, privacy and safety risks has become urgent, says Vijoy Pandey, senior vice president at Outshift by Cisco. (Outshift is Cisco's incubation engine, exploring the kinds of transformative technology critical to new and emerging markets.)

"Every aspect of our personal and business lives, across industries, has been impacted by generative AI," Pandey says. "Putting a responsible AI framework in place is crucial, now that AI has broken free from specific use cases for specific products, and is embedded in everything we do, every day."

The risks and real-world cost of irresponsibility

AI is enabling tremendous innovation, but technology leaders must understand that there's a real-world cost involved. When AI does good, it can transform lives. When AI goes unattended, it can have a profound impact not just on a company's bottom line, but on the humans whose lives it touches. And generative AI brings its own brand-new set of issues: a big swing of the pendulum away from predictive AI, recommendations and anomaly detection, toward an AI that produces ostensibly new content.

"We're not only looking at privacy and transparency, we're starting to look at IP infringement, false content, hallucinations, and more."

"I call it regenerative AI, because it uses things that exist, cobbles them together, and generates new audio content, videos, images, text," Pandey says. "Because it's generating content, new issues creep in. We're not only looking at privacy and transparency, we're starting to look at IP infringement, false content, hallucinations, and more."

Customer data and proprietary company IP are at risk as these generative AI models hoover up all the data available to them across the internet. When an AI engine is asked a question or sent a prompt, there's a real danger of sending data that shouldn't be public if there are no guardrails in place. It's also increasingly easy for these AI engines to learn and train on proprietary data sets: Getty Images' lawsuit against Stability AI, maker of the generative AI art tool Stable Diffusion, is a stark example. And the risk is growing as the technology becomes more powerful and more deeply embedded into company infrastructure.
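
To make the idea of a guardrail concrete, here is a minimal, hypothetical sketch of one common pattern: scrubbing sensitive strings from a prompt before it leaves the company for an external AI service. The patterns and the redact helper are illustrative assumptions, not any vendor's actual API; production guardrail products use far more sophisticated detection.

```python
import re

# Hypothetical patterns for data that should never leave the company:
# email addresses, U.S. Social Security numbers and internal code names.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PROJECT": re.compile(r"\bProject [A-Z][a-z]+\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before the
    prompt is sent to an external generative AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

raw = "Summarize the complaint from jane.doe@example.com about Project Falcon."
print(redact(raw))
# Summarize the complaint from [EMAIL REDACTED] about [PROJECT REDACTED].
```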

A framework for trustworthy and responsible AI

With the emergence of generative AI, a responsible AI framework must place an emphasis on IP infringement, unanticipated output, false content, security and trust.

"Trustworthy AI is the bigger umbrella we're all starting to look at," Pandey explains. "It's not just about being unbiased, transparent and fair. It's also about making sure we're not generating faulty or distorted content, or violating copyright laws."

The move toward security and trust in this framework also means ensuring that there is responsibility baked into every AI initiative, with clear lines of authority, so that it's easy to identify who or what is liable, if something goes wrong.

Transparency reinforces trustworthiness because it gives agency back to customers in situations where AI is being used to make decisions that affect them in material and consequential ways. Keeping communication channels open helps build the trust of customers and stakeholders. It's also a way to mitigate harmful bias and discriminatory results in decision-making, and to create technology that promotes inclusion.

For instance, the new security product Outshift is developing, Panoptica, helps provide context and prioritization for cloud application security issues — which means it's handling hugely sensitive information. So to ensure that it doesn't expose any private information, Outshift will be transparent about the unbiased synthetic data it trains the model on.

And when Cisco added AI for noise suppression in Webex for video meetings, which cancels any noise besides the voices of the attendees in front of their computers, it was crucial to ensure the model wasn't being trained on conversations that included sensitive information, or private conversations. When the feature rolled out, the company was transparent about how the model was trained, and how the algorithms work to ensure it remains bias-free, fair and stays fixed in its lane, training only on the correct data.

Accountability is about taking responsibility for all consequences of the AI solution, including the times it does jump the fence and suddenly begins operating outside its intended parameters. It also includes making privacy, security and human rights the foundation of the entire AI life cycle, which encompasses protection against potential cyberthreats to improve attack resiliency, data protection, threat modeling, monitoring and third-party compliance.

Even if a system isn't threatened from the outside by malicious actors, there's always a risk of inaccurate results, particularly with generative AI. That requires systematic testing of an AI solution once it's launched, to maintain consistency of purpose and intent across unforeseen conditions and use cases.

"Responsible AI is core to our mission statement, and we've been a champion of the responsible AI framework for predictive AI since 2021," Pandey says. "To us, it's part of the software development life cycle. It's as embedded in our processes as a security assessment."

Implementing trustworthy and responsible AI: Beyond people and processes

"First and foremost, it's imperative that C-suites start educating their teams and start seriously thinking about responsible AI, given the pervasiveness of the technology, and the dangers and the risks," Pandey says. "If you look at the framework, you see it requires cross-functional teams, from the security and trust side to engineering, IT, government and regulatory teams, legal, and even HR, because there are ramifications both internally and in partnerships with other companies."

"It requires cross-functional teams, from the security and trust side to engineering, IT, government and regulatory teams, legal, and even HR because there are ramifications both internally and in partnerships with other companies."

It starts with education about the risks and pitfalls, and then building a framework that matters: customized to your own use cases and using language that every team member can rally behind, so that you're all on the same page. The C-suite then needs to define the required business outcomes, because without them, all of these remain best-effort initiatives.

"If the entirety of the world is moving toward digitization, then AI, data and responsible AI become a business imperative," he says. "Without building a business value into every use case, these efforts will just disappear over time."

He also notes that as we move from predictive to generative AI, as the world becomes increasingly digitized and as the number of use cases multiplies, the machines, software and tools that independently power these solutions will also need to operate within these frameworks.

Deploying and using AI in every facet of a business is incredibly complex — and the churning regulatory landscape makes it clear that it will keep getting more complicated. Companies will need to keep an eye on how regulations evolve, as well as invest in products and work with companies that can help solve the pain points that flare up when pursuing a responsible AI strategy.

Getting started on the trustworthy and responsible AI journey

Launching a responsible AI initiative is a tricky process, Pandey says. But the first step is to ensure you're not AI-washing, that is, using AI no matter the use case, and instead to identify business outcomes as well as where and when AI and machine learning are actually required to make a difference. In other words, where does the business bring differentiation, and what can you offload?

"Just because there's AI everywhere, throwing AI at every problem is expensive and adds unnecessary complexity," he says. "You need to be very particular about where you use AI, as you would with any other tool."

"I definitely believe technology solutions to these problems will come out of the industry."

Once you determine the most appropriate use cases, you must build the right abstraction layers across people, process and software to handle the inevitable churn as you develop the organizational structure required to use AI in a responsible way.

"And finally, have hope and faith that technology will solve technology's problems," Pandey says. "I definitely believe technology solutions to these problems will come out of the industry. They'll solve for this complexity, for this churn, for the responsible AI framework, for the data leakage, privacy, IP and more. But for now, ensure that you're ready for these evolutions."

Learn more here about the ways Outshift by Cisco is predicting, planning and solving the challenges of the future with transformative technology.

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they're always clearly marked. For more information, contact [email protected].

AI Startup Anthropic Gets $100M To Build Custom LLM For Telecom Industry

Anthropic is on a fundraising spree. After its massive Series C round in May and subsequent support from SAP, the AI startup, known for its ChatGPT competitor Claude, is raising an additional $100 million from South Korean telecom major SK Telecom (SKT).

According to a press release from SKT, the investment is being made as part of a strategic partnership that will see Anthropic develop a custom large language model (LLM) to meet the needs of the telecom industry. SKT had also participated in Anthropic's May round through its venture capital arm.

"With our strategic investment in Anthropic, a global leading AI technology company, we will be working closely … to promote AI innovation. By combining our Korean language-based LLM with Anthropic's strong AI capabilities, we expect to create synergy and gain leadership in the AI ecosystem together with our global telco partners," Ryu Young-sang, CEO of SKT, said in a statement. 

The round takes the total capital raised by Anthropic to well over $1.5 billion, according to Crunchbase.


Industry-specific, multilingual LLM from Anthropic

With this engagement, SKT and Anthropic will work together to provide the former's telco partners with a multilingual LLM, customized for different industry-specific needs.

Anthropic will combine its state-of-the-art AI technology, including the Claude assistant, with SKT's deep expertise in telecommunications and Korean language LLMs to build a model supporting Korean, English, German, Japanese, Arabic and Spanish. This model will be fine-tuned for different telco-industry-specific use cases, from customer service, marketing and sales to interactive consumer applications. 

As SKT notes, the approach will not only save the time and effort required to build LLMs from scratch but will give telcos easy access to a model that performs much better than the general models on the market. Jared Kaplan, cofounder and chief science officer at Anthropic, will oversee the project, covering customization and the entire product roadmap.

"SKT has incredible ambitions to use AI to transform the telco industry. We're excited to combine our AI expertise with SKT's industry knowledge to build an LLM that is customized for telcos. We see industry-specific LLMs as having high potential to create safer and more reliable deployments of AI technology," Dario Amodei, cofounder and CEO of Anthropic, said.

Anthropic's approach to generative AI differs from those of rival OpenAI and other competitors in its focus on creating "constitutional AI": AI models whose responses in training are graded according to a specific, ethics-based rule set.

"At a high level, the constitution guides the model to take on the normative behavior described in the constitution — here, helping to avoid toxic or discriminatory outputs, avoiding helping a human engage in illegal or unethical activities, and broadly creating an AI system that is helpful, honest and harmless," wrote Anthropic on its webpage describing the constitution for its LLM Claude.

Integration with Telco AI Platform

Once the multilingual model is fine-tuned and ready, Anthropic will work with SKT to integrate it into the Telco AI Platform being built to serve as the core foundation for new AI services in the telecom industry, including those designed to improve existing services, digital assistants, and super apps that offer a wide range of services. 

The platform is being developed by the Global Telco AI Alliance, which includes four members: SKT, Deutsche Telekom, e& and Singtel. With a custom version of Claude, each will be able to quickly and efficiently build and deploy services and apps customized to its respective market and customers.

With this round, Anthropic continues to be among the highest-funded startups in the AI space, sitting right behind OpenAI, the Microsoft-backed startup that has raised over $11 billion so far. Other notable competitors are Inflection AI, which has raised nearly $1.5 billion, and Adept with $415 million in the bag.


Bridging The Tech Gap: How To Make AI Accessible For Hourly Workers

Sean Behr is an experienced executive, entrepreneur, investor and advisor. He is the CEO of Fountain, a high-volume hiring platform.

Hourly workers keep the world moving. They make our coffee, clean our buildings and haul our packages across the country so we get them on time. These pillars of our economy often face a technological disadvantage when applying for jobs that hinge on tech-dependent hiring processes. After all, application processes for hourly roles look a little different.

And while AI has taken the corporate world by storm in recent months—promising to help companies reduce the time and costs associated with filling roles—it brings to light the question of equity and accessibility. If these 82.3 million hourly workers don't have the means to apply for certain jobs, can hiring really be considered a fair practice? And even for those who can navigate today's hiring processes, how can recruiters ensure the process is free from bias and discrimination?

Using AI to screen job applicants, for example, might inadvertently create a barrier for hourly workers who don't have access to the technology or the technical skills to move through advanced applicant tracking systems.

I'm going to break down the issues concerning ethics and accessibility of technology, specifically artificial intelligence, and reveal how recruiters can bridge the gap in hiring equity.

How To Reap The Benefits Of AI While Maintaining Ethical Standards

The best approach to adopting AI as part of your hiring funnel is to see it as a tool to augment human actions, rather than a replacement for your team of recruiters. As a hiring tool, AI has the power to slash hiring time, freeing up recruiters' schedules so they can focus on other things that technology has yet to master, like performing one-to-one interviews with candidates.

AI also helps increase efficiency and recruiter productivity, while enhancing the applicant experience by automating forward movement through the funnel. These benefits can be transformational for hiring teams and create a more hands-off process.

But even with boundless automation, recruiters still need to pay close attention to make sure quality candidates don't slip through the cracks due to faulty filtering configurations. More importantly, they need to conduct frequent audits to ensure the potential for bias and discrimination is practically nonexistent.

According to a study conducted by the Pew Research Center, 47% of respondents said AI could do a better job than humans at evaluating job applicants equally.

While these may be the beliefs of some, recruiting teams would be better off employing a hybrid system, pairing AI with human interaction to make certain the technology is operating fairly. This initially may require more of a time commitment on the part of recruiters, but it's a key step to help talent acquisition teams build an equitable system.
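
One simple, widely used form such an audit can take is an adverse-impact check modeled on the EEOC's four-fifths guideline: compare each group's selection rate at a screening stage against the highest group's rate, and flag any ratio below 0.8. Here is a minimal sketch with hypothetical data; the records and threshold are illustrative, not legal advice.

```python
from collections import defaultdict

def adverse_impact_ratios(records):
    """records: (group, advanced) pairs from one AI screening stage.
    Returns each group's selection rate divided by the highest group's
    rate; under the four-fifths guideline, ratios below 0.8 warrant a
    closer look at the screening configuration."""
    passed, total = defaultdict(int), defaultdict(int)
    for group, advanced in records:
        total[group] += 1
        passed[group] += int(advanced)
    rates = {g: passed[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical outcomes: 40% of group A advanced vs. 25% of group B.
sample = ([("A", True)] * 40 + [("A", False)] * 60
          + [("B", True)] * 25 + [("B", False)] * 75)
print(adverse_impact_ratios(sample))  # {'A': 1.0, 'B': 0.625} -> review B
```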

Why Accessibility To Technology Is Essential For All Workers

Although AI can help reduce bias in the hiring process by qualifying candidates based on their skills, qualifications and experience, rather than demographic factors like gender and race, the use of the technology is irrelevant if your ideal applicants don't have the means to apply.

There are millions of hourly workers in the U.S. workforce who apply, are screened and are hired very differently from their corporate counterparts.

For example, applicants for hourly jobs typically want to start working as soon as possible and expect employers to meet this urgency with a fast hiring process. But in order for this to happen, these applicants need to be able to apply in the first place, a process that has shifted from pen-and-paper applications to tech-leading applicant tracking systems (ATSs).

Hourly workers may not have access to desktop computers or smartphones, which are where most modern job applications live. They also may not always have the luxury of time to spend completing a job application.

Applicants are applying to multiple jobs at the same time, and research from Appcast shows that a staggering 92% of applicants abandon online job applications, potentially due to their length and complexity. This limits access to job opportunities, which can deny certain populations the opportunity to contribute to society and provide for themselves and their families.

To expand access to opportunities in the wake of fast-moving technology, companies that hire hourly workers need to make sure their target applicants have access to and are able to complete the job applications of today.

How To Bridge The Technology Gap For Hourly Workers

Organizations have the ability to close this gap and make applying for hourly jobs easy and accessible for all workers using some of the strategies below.

Mobile Compatibility

For applicants who do have smartphones, a simplified, mobile-optimized application that doesn't require login credentials and doesn't ask applicants to upload or copy/paste a resume can help workers of all technological proficiencies access and apply for jobs. For applicants who don't have smartphones, recruiters should look into enabling a text-to-apply functionality to open the application to an even wider pool.

User-Friendly Interfaces And Intuitive Applications

We may be up to speed on the latest smartphone and operating system updates, but that doesn't mean every worker is on the same level. Whatever application interface you use, make sure it's easy to follow, the directions are clear and candidates are informed when their applications have been submitted.

Language Inclusivity And No Potential For Bias

When writing job descriptions and fine-tuning the user experience of your job application, use inclusive, accessible language to help applicants feel confident and comfortable about applying to your job.

Conclusion

As hiring technology careens forward, we must do everything in our power to ensure none of our applicants are left behind. This is not a solitary function concentrated on one department; collaboration is a cornerstone of achieving organizationwide equity.

Together, we can build a more equitable workforce that creates opportunities for workers of all backgrounds, capabilities and technological aptitudes.
