
Deepfake porn could be a growing problem amid AI race




Generative AI Vs. AI


Generative AI and AI are both powerful emerging technologies that are reshaping business. They are very closely related, yet have significant differences:

  • Generative AI is a specific form of AI that is designed to generate content. This content could be text, images, video or music. It uses AI algorithms to analyze patterns in datasets and mimic style or structure to replicate different types of content. It is also used to create deepfake videos and voice messages.
  • Artificial intelligence (AI) is a technology that has the ability to perform tasks that typically require human intelligence. AI is often used to build systems that have the cognitive capacity to mine data and continuously boost performance – to learn – from repeated events.

    Let's more deeply examine generative AI and AI, lay out their respective use cases, and compare these two rapidly growing emerging technologies.

    Generative AI vs. AI

    Both generative AI and artificial intelligence use machine learning algorithms to obtain their results. However, they have different goals and purposes.

    Generative AI is intended to create new content, while AI goes much broader and deeper – in essence to wherever the algorithm coder wants to take it. These possible AI deployments might be better decision making, removing the tedium from repetitive tasks, or spotting anomalies and issuing alerts for cybersecurity.

    In contrast, generative AI finds a home in creative fields like art, music and product design, though it is also gaining a major role in business. AI itself has found a very solid home in business, particularly in improving business processes and boosting data analytics performance.

    To summarize the differences between generative AI and AI, briefly:

  • Creativity: Generative AI is creative and produces things that have never existed before. Traditional AI is more about analysis, decision making and getting more done in less time.
  • Predicting the future: Generative AI spots patterns and combines them into unique new forms. AI has a predictive element whereby it utilizes historical and current data to spot patterns and extrapolate potential futures in very powerful ways.
  • Broad vs. Narrow: Generative AI uses complex algorithms and deep learning and large language models to generate new content based on the data it is trained on. It is a specific and narrow application of AI to very creative use cases. Traditional AI can accomplish far more based on how the algorithms are designed to analyze data, make predictions and automate actions – AI is the foundation of automation.
    Also see: Top Generative AI Apps and Tools

    Now, let's go deeper into generative AI and artificial intelligence:

    Understanding Generative AI

    Generative AI is AI technology geared for creating content. Generative AI combines algorithms, large language models and neural network techniques to generate content that is based on the patterns it observes in other content.

    Although the output of a generative AI system is classified – loosely – as original material, in reality it uses machine learning and other AI techniques to create content based on the earlier creativity of others. It taps into massive repositories of content and uses that information to mimic human creativity; most generative AI systems have digested large portions of the Internet.

    Machine learning algorithms

    Generative AI systems use advanced machine learning techniques as part of the creative process. These techniques ingest and repeatedly process content, reshaping earlier material into a malleable data source that can create "new" content based on user prompts.
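    The pattern-mimicking idea can be illustrated with a toy example. Modern generative systems use neural networks rather than anything this simple, but a tiny Markov-chain generator shows the core loop: learn which words follow which in earlier content, then emit "new" sequences from a prompt. The corpus and names below are invented for illustration.

```python
import random

def build_chain(corpus, order=1):
    """Map each word to the words observed to follow it in the corpus."""
    words = corpus.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def generate(chain, prompt, length=8, seed=0):
    """Walk the chain from the prompt, emitting 'new' text shaped by the corpus."""
    rng = random.Random(seed)
    out = list(prompt)
    key = tuple(prompt)
    for _ in range(length):
        followers = chain.get(key)
        if not followers:
            break  # dead end: no observed continuation
        out.append(rng.choice(followers))
        key = tuple(out[-len(key):])
    return " ".join(out)

corpus = "the model learns the patterns and the model mimics the patterns it learns"
chain = build_chain(corpus)
print(generate(chain, ("the",)))
```

    Every word the sketch emits was seen in the training text; only the arrangement is new, which is the essence of the "mimics earlier creativity" point above.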

    Using earlier creativity

    As noted above, the content provided by generative AI is inspired by earlier human-generated content. This ranges from articles to scholarly documents to artistic images to popular music. The music of pop singer Drake and the artist The Weeknd was famously used by a generative AI program to create a "new" song that received considerable positive attention from listeners (the song was soon removed from major platforms in response to the musicians' record label).

    Vast datasets

    Generative AI can accomplish tasks like analyzing the entire database of an insurance company, or the entire record-keeping system of a trucking company, to produce an original set of data and/or business processes that provides a major competitive boost.

    Thus, generative AI goes far beyond traditional machine learning. By utilizing multiple forms of machine learning systems, models, algorithms and neural networks, generative AI offers a new complement to human creativity.

    Also see: Generative AI Companies: Top 12 Leaders

    Generative AI Use Cases

    Generative AI is being used to augment but not replace the work of writers, graphic designers, artists and musicians by producing fresh material. It is particularly useful in the business realm in areas like product descriptions, suggesting variations to existing designs or helping an artist explore different concepts.

    Generate text

    Generative AI can generate legible text on various topics. It can compose business letters, provide rough drafts of articles and compose annual reports. Some journalistic organizations have experimented with having generative AI programs create news articles. Indeed, many journalists feel threatened by generative AI.

    Generate images

    Generative AI can generate realistic or surreal images from text prompts, create new scenes and simulate a new painting. Note, however, that the fact that these images are based on the images fed into the generative AI system is prompting lawsuits by creative artists. (And not only graphic artists, but writers and musicians as well.)

    Generate video

    It can compile video content from text automatically and put together short videos using existing images. The company Synthesia, for instance, allows users to create text prompts that will create "video avatars," which are talking heads that appear to be human.

    Generate music

    It can compile new musical content by analyzing a music catalog and rendering a similar composition in that style. While this has caused copyright issues (as noted in the Drake and The Weeknd example above), generative AI can also be used in collaboration with human musicians to produce fresh and arguably interesting new music.

    Product design

    Generative AI can be fed inputs from previous versions of a product and produce several possible changes that can be considered in a new version. Given that these iterations can be produced in a very short amount of time – with great variety – generative AI is fast becoming an indispensable tool for product design, at least in the early creative stages.

    Personalization

    Generative AI can personalize experiences for users such as product recommendations, tailored experiences and unique material that closely matches their preferences. The advantage is that generative AI benefits from the hyper-speed of AI – producing personalization for many consumers in mere minutes – but also from the creativity it has displayed in art and music to generate fresh, individualized personalizations.
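    As a rough sketch of how preference-matched recommendations can work under the hood – real personalization engines are far more sophisticated – items and users can be represented as feature vectors and ranked by similarity. The catalog, feature dimensions and values below are hypothetical.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user_profile, catalog, top_n=2):
    """Rank catalog items by similarity to the user's preference vector."""
    scored = sorted(catalog.items(),
                    key=lambda kv: cosine(user_profile, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_n]]

# Hypothetical feature dimensions: (jazz, rock, classical)
catalog = {
    "album_a": (0.9, 0.1, 0.0),
    "album_b": (0.1, 0.9, 0.0),
    "album_c": (0.8, 0.0, 0.2),
}
user = (1.0, 0.0, 0.1)
print(recommend(user, catalog))  # the two jazz-leaning albums rank first
```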

    "Generative AI is an indispensable ally for individuals who are newly entering the workforce," said Iterate.Ai Co-Founder Brian Sathianathan. "It can serve as an invisible mentor, assisting with everything from crafting compelling resumes and mastering interview strategies to generating professional correspondence and formulating career plans. By providing personalized advice, learning opportunities, and productivity tools, it can help new professionals navigate their career paths more confidently."

    Also see: AI Detector Tools

    Understanding AI

    Artificial intelligence is a technology used to approximate – often to transcend – human intelligence and ingenuity through the use of software and systems. Computers using AI are programmed to carry out highly complex tasks and analyze vast amounts of data in a very short time. An AI system can sift through historical data to detect patterns, improve the decision-making process, eliminate manually intensive tasks and improve business outcomes.

    Also see: 100+ Top AI Companies 2023

    Isolating patterns

    AI can spot patterns among vast amounts of data. It does this using specialized GPU processors (Nvidia is a leader in the GPU market) that enable super fast computing speed. Some systems are "smart enough" to predict how those patterns might impact the future – this is called predictive analytics and is a particular strength of AI.
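    Predictive analytics in practice can be as elaborate as a deep neural network, but the core idea – fit a model to historical data, then extrapolate forward – can be sketched with an ordinary least-squares trend line. The sales figures below are hypothetical.

```python
def fit_trend(ys):
    """Ordinary least-squares fit of y = a*x + b over x = 0..n-1."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def forecast(ys, steps):
    """Extrapolate the fitted line `steps` periods past the end of the data."""
    a, b = fit_trend(ys)
    return a * (len(ys) - 1 + steps) + b

sales = [100, 110, 120, 130]   # perfectly linear history, for clarity
print(forecast(sales, 2))      # -> 150.0
```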

    Better business decisions

    AI can be used to provide management with possible opportunities for expansion, as well as to detect potential threats that need to be addressed. It helps in ways such as product recommendations, more responsive customer service and tighter management of inventory levels. Some executives use AI as an "additional advisor," meaning they incorporate recommendations from both their colleagues and AI systems, and weigh them accordingly.

    Heightened data analytics

    AI adds another dimension to data analytics, bringing greater accuracy and speed to the process. Used correctly, AI increases the chance of success and achieving positive outcomes by basing data analytics decisions on a much wider volume of data – and ideally higher quality data – whether historical or in real time.

    Through the rapid detection of data analytics patterns, business processes can be improved to bring about better business outcomes and thereby assist organizations in gaining competitive advantage.

    AI Use Cases

    AI has almost limitless use cases – and more seem to crop up every week. Some of the top AI use cases include automation, speed of analysis and execution, chat and enhanced security. Be aware that additional vertical use cases are launching in education, healthcare, finance and other industry sectors.

    Automation

    AI can automate complex, multi-step tasks to help people get more done in a shorter span of time. For instance, IT teams can use it to configure networks, provision devices, and monitor networks far more efficiently than humans. AI is the driver behind robotic process automation, which helps office workers automate many mundane tasks, freeing up humans for higher value tasks.

    Speed

    AI finishes tasks with extraordinary speed. It uses technologies like machine learning, neural networks and deep learning to find and manipulate data in a very short time frame. This helps organizations to detect and respond to trends and opportunities in as close to real time as possible. The amount of data AI can analyze lies far outside the range of rapid inspection by a person.

    Chat

    AI-based chat, and the chatbots it powers, appears to be the app that has finally taken AI into the mainstream. Systems such as ChatGPT and others are introducing chat into untold numbers of applications. Done well, these applications improve customer service, search and querying, to name a few. And the advantage of AI is that, over time, the system improves, meaning that the AI chatbot is capable of ever more human-like conversation.

    Enhanced Security

    AI harnesses machine learning algorithms to analyze, detect, and alert managers about anomalies within the network infrastructure. Some of these algorithms attempt to mimic human intuition in applications that support the prevention and mitigation of cyber threats. This can help to alleviate the work burden on understaffed or overworked cybersecurity teams. In some cases, AI systems can be programmed to automatically take remediation steps following a breach.
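    Production anomaly detection uses far richer models, but a minimal version of the idea – flag values that deviate sharply from the baseline before alerting a security team – can be sketched with z-scores. The login counts and threshold below are illustrative only.

```python
from statistics import mean, stdev

def flag_anomalies(readings, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu = mean(readings)
    sigma = stdev(readings)
    return [x for x in readings if sigma and abs(x - mu) / sigma > threshold]

# Hypothetical per-minute login counts; the spike may warrant an alert.
logins = [12, 14, 13, 15, 12, 14, 13, 300]
print(flag_anomalies(logins, threshold=2.0))  # -> [300]
```

    A real system would score streams continuously and feed flagged events into alerting or automated remediation, as described above.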

    AI, therefore, is finding innumerable use cases across a wide range of industries. It provides managers with data and conclusions they can use to improve business outcomes. Moreover, AI technology in all of its forms is still in its infancy, so expect the application of AI to use cases to both broaden and deepen.

    Also see: Best Artificial Intelligence Software 2023

    Bottom Line: Generative AI vs. AI

    Algorithms can be regarded as some of the essential building blocks that make up artificial intelligence. AI uses various algorithms that act in tandem to find a signal among the noise of a mountain of data and find paths to solutions that humans would not be capable of. AI makes use of computer algorithms to impart autonomy to the data model and emulate human cognition and understanding.

    Generative AI is a specific use case for AI that is used for sophisticated modeling with a creative goal. It takes existing patterns and combines them to be able to generate something that hasn't ever existed before. Because of its creativity, generative AI is seen as the most disruptive form of AI.

    "Mainline AI applications based around learning, training and rules are fairly common in support of autonomous operations (vehicles, drones, control systems) as well as diagnostics, fraud and security detection, among other uses," said Greg Schulz, an analyst at StorageIO Group. "Generative AI has the ability to ingest large amounts of data from various sources that gets processed by large language models (LLMs) influenced by various parameters to create content (articles, blogs, recommendations, news, etc.) with a human-like tone and style."

    Also see: ChatGPT vs. Google Bard: Generative AI Comparison


    DataRobot Now Covers Entire Generative AI Lifecycle For Enterprises


    Boston-based DataRobot, a unicorn startup that offers a platform for enterprise AI development, is going all in on generative AI support. The company today said it is updating its offering with new generative AI-specific capabilities and applied services to give teams an open and end-to-end solution for experimenting with, building, deploying and monitoring enterprise-grade AI assistants.

    The development follows DataRobot's March update and will make it easier for teams to go from concept to value with gen AI. It comes at a critical time as almost every enterprise is under pressure to put the technology to business use and drive impact.

    "I have the privilege of working hand-in-hand with AI leaders and data teams worldwide," Jay Schuren, chief customer officer (CCO) at DataRobot, told VentureBeat. "What I'm hearing from them is that there is a tremendous amount of pressure and demand they are facing right now to get generative AI solutions out to the business. The major impression is that this technology is meaningful (not purely hype), and companies that move fast will have a real competitive advantage."

    However, the effort to move fast and deploy in production is encountering some roadblocks.


    Solving generative AI problems

    Companies across sectors have moved or are moving to complete their initial generative AI prototypes. However, the reality is that most of them are yet to realize tangible business value from these initiatives. 

    Common challenges, as Schuren explained, include starting with the wrong problem, ecosystem lock-in, maintaining models and vector databases, not thinking through last-mile usage and not fully trusting the generative AI application's outputs. 

    "I've had CIOs tell me how there are new vector databases that have popped up and that they're spelunking through logs to try to figure out who built them and what they're supposed to be doing. I've also heard from CDOs that their teams have a roadmap of use cases but they can't build them fast enough and get them out in a trusted way, and are playing in LLM playgrounds but don't have the right way to get these solutions out in the market," the CCO said.

    To address these gaps, the company is now building upon its existing platform, enabling users to not only build generative AI solutions end-to-end (including solution development, backend hosting/monitoring and front-end hosting/monitoring) with a few lines of code and fewer personas, but also to integrate predictive AI models into these pipelines to audit generative AI outputs. 

    For building and deploying gen AI solutions, the offering is providing a solution framework that allows users to integrate large language models, vector databases and prompting strategies of their choice with internal contextual (typically unstructured) data within DataRobot-hosted notebooks. 

    This will give teams much-needed flexibility to use and compare different LLMs and other generative components to see what works best for their targeted use case.
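    DataRobot's actual framework is proprietary, but the general pattern it describes – retrieve relevant internal context, then assemble a grounded prompt for whichever LLM you've chosen – can be sketched in a few lines. The scoring here uses naive keyword overlap purely for illustration (real systems use vector embeddings), and the documents and names below are invented.

```python
def score(query, doc):
    """Naive keyword-overlap relevance score (stand-in for embedding similarity)."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q)

def retrieve(query, docs, k=2):
    """Return the k most relevant internal documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    """Assemble a grounded prompt; the string would go to the LLM of your choice."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

internal_docs = [
    "refund requests are processed within 5 business days",
    "the travel policy caps hotel rates at 200 dollars",
    "laptops are refreshed every 3 years",
]
print(build_prompt("how long do refund requests take", internal_docs))
```

    Swapping the scorer, the document store or the downstream model is exactly the kind of component-level comparison the flexibility above refers to.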

    Similarly, for building trust in the applications being developed, the offering will provide operational and data drift metrics as well as more specific generative AI metrics like toxicity and truthfulness to ensure applications stay "on-topic."

    "We are bringing the power of predictive sidecar models to validate and audit outputs of generative models. In addition, customers can define their own custom performance metrics that they want to use for monitoring things like truthfulness, topic drift and other use-case-specific metrics, as well as for tracking and monitoring LLM costs to ensure they don't spiral out of control," Schuren noted.

    Finally, to streamline the feedback process and iteration on prototypes, DataRobot will host a Streamlit Application Sandbox. This will allow users to quickly prototype, build and deploy end-to-end applications/assistants to their business stakeholders.

    Applied services bundled

    When using these new capabilities, teams can also take advantage of DataRobot's new enablement-focused applied AI services. This largely covers three areas: training to help leaders establish the level of generative AI proficiency needed to remain competitive; ideation and roadmapping to help teams go from use case ideation to implementation; and a trust and compliance framework to support responsible generative AI development and meeting existing and upcoming regulations.

    "The new generative AI-focused services are offered both bundled and offered separately depending on the customers' needs and the goals of the particular use case. Some teams want enablement throughout the process in the form of training, ideation/roadmapping, some want end-to-end delivery work, while others are looking for trust/compliance frameworks. We work with each customer closely to determine their needs and find the offer that they will be most successful with," Schuren added.

    Available right away

    Teams looking to use the new generative AI capabilities and services from DataRobot can get started right away. The platform includes templated recipes with some best practices, which can later be customized to leverage the components that work best. Many organizations have started testing the offerings pre-launch, including Baptist Health South Florida and FordDirect.



    From Hype To Policy: How To Approach Generative AI In Practice

    AI has become increasingly accessible to the public, and employees are using the technology in the course of their everyday work more than ever before. (Image: NurPhoto via Getty Images)

    Whether you've used ChatGPT to create a grocery list or spent time wondering if a robot will take over your job, many of us have become increasingly familiar with the benefits and risks of artificial intelligence (AI). The immediate question people leaders specifically are grappling with now is how to address the fact that employees will increasingly be leveraging AI in the course of their work.

    Despite the hype, many of us in HR are learning that this isn't a time to panic but to adapt. The fact is many of the tools we use in HR processes already have AI components, and that's only going to become more prominent. Further, when it is used responsibly and ethically, AI can have a positive impact on an organization by taking over time-consuming but critical tasks.

    To better understand how to mitigate the inherent risks of AI while allowing room for experimentation and exploration within your organization, I turned to Robert Scott, General Counsel and the SVP of Legal at Lattice. As a lawyer and thought leader in the data privacy community currently working in HR technology, Robert was early to recognize the rising impact of AI in the workplace as access to high-quality AI models spread.

    In our conversation, we discuss Robert's approach to crafting AI policy as well as the use cases and application of AI within high-growth organizations and their risks and benefits.

    You were early to create an AI policy for Lattice, where many organizations may not yet have one in place. How did you approach this policy, and do all organizations need one?

    Robert: Whether or not you have an AI policy in place today, there are people within your organization using AI to do business. To set the stage, I'm a huge proponent of AI and what the technology can help individuals accomplish. I ultimately believe that the benefits outweigh the risks.

    That said, different organizations will have different use cases and risk tolerances. When developing and implementing a policy within your own organization, you'll want to work with your counsel and understand your business needs as they pertain to AI as well as your organization's risk tolerance. This will look different for every organization, but we'll get into some common use cases you may want to consider.

    A good place to start: Understand how different teams within your organization are currently using AI. Conduct a listening tour and determine which use cases you should be trying to restrict or limit, as well as which are low-risk, high-value activities that you want to encourage and help facilitate.

    Obviously, one of the reasons we are discussing this today is that the explosive rise of ChatGPT has made AI tools even more accessible to employees. What are some of the use cases and applications of AI that you see in the workplace today, and how are you thinking about them in terms of risk levels?

    Robert: Some common low-risk, high-value use cases are those which do not require the user to share personal data or proprietary information. Sales outreach is a great example in that an account executive (AE) can share with, say, ChatGPT, what they want to achieve with a prospecting email and get help drafting this communication. We all get writer's block and know what a time suck these emails can be, so using AI really increases efficiency here. Marketing content is similar in that AI is great for ideation and can help explore potential paths for creative content.

    While you probably don't want to go to a large language model (LLM) and ask it to create a new application for you, quality assurance (QA) is a great engineering use case. An engineer can take code they've written and ask AI to debug it, driving efficiency without exposing your organization to a lot of risks.

    Drafting policy and creating presentations are other low-risk, high-value ways I've seen teams and individuals leverage AI.

    One use case that is particularly relevant for People teams, but is a bit murkier in terms of risks and values, is with applicant tracking systems (ATS). Right now, there are tools that allow you to run a video interview with an applicant and get recommendations based on the candidate's suitability for a role. This may seem spooky – and we all know there's a risk of bias in AI – but I'm actually excited for large (and very risk-tolerant) companies to pursue and experiment with these kinds of tools.

    For most teams, however, I'd encourage you to work closely with legal counsel when using AI for ATS, because there are certainly more regulations there than with other use cases, with more coming down the pike.

    What are some other risks of working with AI that we should be thinking about?

    Robert: Data privacy is top of mind. Lattice has sales operations in EMEA and we're based in California, so we need to think through compliance with the California Consumer Privacy Act (CCPA), and the General Data Protection Regulation (GDPR). These may require opt-in consent in order to use a person's data, so the use of AI would be considered a processing activity.

    Even though AI is not human, sharing your confidential information with the tool could result in the tool somehow repurposing that information or breaching a confidentiality obligation in a customer contract. You could also inadvertently disclose strategic business information. One of the best ways to mitigate this risk is by entering an enterprise contract with your AI vendor of choice and working with your legal counsel to ensure confidentiality protections are in place.
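    Beyond contractual protections, some teams also scrub obvious identifiers before a prompt ever leaves their network. Here is a minimal sketch of that idea, assuming a few hypothetical regex patterns; a production system would use a vetted PII-detection library rather than hand-rolled expressions.

```python
import re

# Hypothetical patterns; real deployments need far more robust detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text):
    """Replace likely PII with placeholders before the prompt leaves your network."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft a reply to jane.doe@example.com, SSN 123-45-6789, phone 555-123-4567."
print(redact(prompt))
# -> "Draft a reply to [EMAIL], SSN [SSN], phone [PHONE]."
```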

    Another potential risk to consider is intellectual property issues. If you leverage AI to develop an innovation, it may not be patentable. Copyright protection is a bit more up in the air at this point, but I wouldn't count on it. Generally, try to limit the use of AI anywhere where you need innovations or ideas to be protected for your organization.

    Lastly, I wouldn't recommend including AI in any employment decision-making at this point, unless you've worked with your vendors and employment law counsel and are sure your use case will be compliant. There is a lot of opportunity for us to reduce human bias by leveraging AI capabilities, but we're not quite there in practice yet.

    At the end of the day, many of the risks around AI can be managed as long as you put guardrails in place to ensure employees are using AI ethically.

    I've heard from my own networks that many CPOs and other HR leaders are recognizing the need for putting a policy in place around AI, but it can be hard to know where to start. What are some key considerations for leaders to keep in mind when crafting an AI policy?

    Robert: First and foremost, keep it simple. Policies should be developed so people can easily understand them.

    It's important to avoid "don't". Creating a policy that fundamentally says, "Don't do it"—that won't work. Instead, look to restrict or limit use in high-risk areas (of course what these areas actually entail is subjective and will largely depend on the risk tolerance and functionality of your organization). Encourage the use of AI in high-value, low-risk areas such as sales and marketing.

    Allow for ideation and innovation, and encourage employee feedback at every step. Understanding how people want to use AI to advance business interests will help you provide a path for them to do that.

    How do you think about getting opt-in from employees and/or helping folks understand how to keep business information private when using these tools? One of the things you suggested was having an enterprise instance (like ChatGPT Business)—will this make the most sense for those of us thinking about data privacy?

    Robert: ChatGPT made a big splash because it made AI accessible. Everyone started using it before regulations could catch up. So we don't have as many answers about the risks that our organizations are being exposed to. A lot of this work involves keeping up with best practices and trends by doing your research, experimenting with the technology yourself, and working with your counsel to meet the needs of your organization.

    For ATS and the tools that are leveraging AI for employment decision-making, first, figure out what your use case is, and just like any procurement initiative, make sure you understand what business outcome you're trying to achieve.

    There's a lot of sales sizzle around AI tools right now, but keep your eye on the ball—what specific problem do you want AI to address? One example is if you have a retention problem that you think stems from managers not selecting the best candidates. You may want to leverage AI here, but first, screen your vendors with that need in mind, and as part of the procurement process, add a regulatory scoping piece and really drill down with potential vendors to see if they can meet those needs.

    When it comes to AI and confidentiality, how do we make sure we have the necessary protections in place?

    Robert: This will be context-specific, but generally, vendors are evolving to meet business needs in response to those businesses pushing on them and saying, essentially, 'we'd love to use your tool, but if the model can be trained on our data, this violates our confidentiality agreements.'

    Each vendor has different default terms and enterprise terms and in some use cases, your organization may be willing to take on more risk than others. For example, you can imagine a sales representative putting confidential information around a new product launch in a prospecting email, and it's up to you whether or not that's OK—how secretive is the information? The risk of ChatGPT sharing that information is probably very low, but as an organization, you need to set those guardrails both internally and with your vendors.

    For folks who are getting started with an AI policy – where should they start?

    Robert: It's so important to initiate the dialogue around AI, and then keep it going. One of the biggest mistakes I've seen teams make is to avoid adopting a policy at all, or adopting a policy and then assuming that the work is done. First, reach an agreement with your key stakeholders around guardrails—which use cases are greenlit, which need additional review, and which you are just not going to pursue, at least for now.

    All of these use cases are dynamic. Your vendors are exploring them as well and some will do a great job at incorporating AI in a compliant and non-biased way, and others might not. So know that you'll need to continuously monitor the situation and keep up the conversation.

    To bring it back to your work at Lattice – it's been a few months now since we put our AI policy in place. What are some of the responses and feedback you've had from the plan so far?

    Robert: The honest answer is that it's been anticlimactic – and that's a good thing. We did our homework and felt confident that the policy we were creating, and the stance we were taking on AI for Lattice, were correct.








    This post first appeared on Autonomous AI, please read the original post: here
