Gary Jackson

Director, Consulting Expert

The future of artificial intelligence with generative AI

For a consulting organization, envisioning an AI-driven future is relatively straightforward: the concept of AI promises to enhance virtually any process or system. However, when you factor in the psychology of analytics and how humans will respond to AI, envisioning and, more importantly, experimentation get much more complex.

The current media hype surrounding AI focuses primarily on the “what if” scenarios AI could bring. Much of that envisioning, however, takes the form of images or videos generated by AI models.

Using neural networks to build images and videos from text prompts is an incredible innovation. Nevertheless, the question remains: how does this innovation truly benefit the enterprise? The ease with which pictures and videos can be created should raise concerns within enterprises even before it opens the door to opportunities.

While AI experimentation is intriguing, its current applications appear highly niche and may not fit mainstream enterprise needs. Let’s look at more applicable, mainstream uses for AI and how you can advance your AI strategy beyond experimentation.

Where is the value of artificial intelligence?

When experimenting with AI use cases, it is important to look for opportunities that will impact the bottom line and drive business value. For example, an enterprise could benefit from AI applications that automatically log CRM entries and help build closer customer relationships, up to the point of closing a deal.

Additionally, follow-up emails can be generated automatically by applications that use natural language processing (NLP) to embed chatbots in marketing landing pages. An enterprise could also benefit from AI-led real-time search and data capture for complex sales forecasting and reporting.
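To make the follow-up email use case concrete, here is a minimal sketch using the OpenAI Python client. The model name and the `draft_follow_up` helper are illustrative assumptions, not a reference to any specific product:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_follow_up(customer_name: str, meeting_notes: str) -> str:
    """Draft a concise sales follow-up email from raw meeting notes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You draft concise, professional sales follow-up emails."},
            {"role": "user",
             "content": f"Customer: {customer_name}\nNotes: {meeting_notes}\n"
                        "Write a short follow-up email."},
        ],
    )
    return response.choices[0].message.content
```

The same pattern extends naturally to logging the drafted email as a CRM entry once a human has reviewed it.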

You should consider three AI experimentation rules when looking to advance along your AI journey and ensure you don’t get caught up in the hype:

  1. Make AI cheaper to run
  2. Earn credits from your AI training sources
  3. Explain why you trust your AI

Make AI cheaper to run

The cost of infrastructure to support AI is skyrocketing. Anyone doing AI experimentation cannot guess from month to month what their cloud computing and storage expenses will be. Those expenses span hardware, software, and labor, and depend on factors such as the amount of storage for training data and the complexity of your AI model.

However, that unpredictability assumes centralized AI infrastructure: relying on hyperscalers like Google, Amazon, and Microsoft, or going with boutique infrastructure providers. Either way, you pay for the graphics processing units (GPUs) whether they are 100% utilized or sitting idle.
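A back-of-the-envelope cost model makes the idle-GPU point concrete. Every price and utilization figure below is an illustrative assumption, not a quote from any provider:

```python
# Illustrative monthly cost model for centralized GPU infrastructure.
# All prices and figures are assumptions for the sake of the example.
GPU_HOURLY_RATE = 2.50       # USD per GPU-hour (reserved, billed even when idle)
STORAGE_RATE_PER_GB = 0.023  # USD per GB-month of training data
NUM_GPUS = 8
HOURS_PER_MONTH = 730
TRAINING_DATA_GB = 5_000
UTILIZATION = 0.40           # GPUs busy only 40% of the time

compute_cost = NUM_GPUS * HOURS_PER_MONTH * GPU_HOURLY_RATE
storage_cost = TRAINING_DATA_GB * STORAGE_RATE_PER_GB
idle_cost = compute_cost * (1 - UTILIZATION)  # paid for, but unused

print(f"Compute: ${compute_cost:,.0f}/month")        # $14,600
print(f"Storage: ${storage_cost:,.0f}/month")        # $115
print(f"Of which idle: ${idle_cost:,.0f}/month")     # $8,760
```

At 40% utilization, more than half of the compute bill buys nothing, which is exactly the waste that decentralized approaches target.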

A new trend for bringing down the cost of AI experimentation is democratizing AI infrastructure through GPU decentralization.

This solution uses a blockchain marketplace for idle GPU computing; the network allows AI developers to scale next-generation rendering work at a fraction of the cost of the centralized GPU cloud, with order-of-magnitude gains in speed.

Many companies offer decentralized GPU rendering platforms where you can run your AI workloads and earn revenue for the infrastructure you share with the community. Adding an idle laptop or server to the network so that others can use your computing power keeps your costs low and turns hardware that would otherwise sit idle into a source of revenue.
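No single API defines these marketplaces, but in outline, renting capacity and offering your own might look like the sketch below. The marketplace URL, endpoints, and fields are all hypothetical:

```python
# Hypothetical client for a decentralized GPU marketplace.
# The URL, endpoints, and JSON fields below are illustrative assumptions.
import requests

MARKETPLACE_URL = "https://marketplace.example.com/api/v1"


def submit_render_job(docker_image: str, max_price_per_hour: float) -> str:
    """Bid for idle GPU capacity on the network and return a job ID."""
    resp = requests.post(f"{MARKETPLACE_URL}/jobs", json={
        "image": docker_image,                      # containerized AI workload
        "gpu_type": "any",                          # accept any matching idle hardware
        "max_price_per_hour": max_price_per_hour,   # cap the bid
    })
    resp.raise_for_status()
    return resp.json()["job_id"]


def offer_idle_gpu(wallet_address: str) -> None:
    """Register this machine's idle GPU so others can rent it."""
    requests.post(f"{MARKETPLACE_URL}/providers", json={
        "wallet": wallet_address,  # where rental revenue is paid out
    }).raise_for_status()
```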

Earn credits from your AI training sources

AI systems aggregate and process vast amounts of data to generate outputs, making it difficult to determine the origin of the training data. Authors, artists, and others are filing lawsuits against generative AI companies for using their copyrighted works without permission to train models. 

In 2023, the U.S. Copyright Office began examining AI-related copyright issues, including the scope of copyright in AI-generated works and the use of copyrighted materials in training data.

If an enterprise AI solution is found to have used proprietary training data, it may face shutdown and substantial monetary damages. 

A promising solution is emerging that enables tracing the origin of training data and providing properly licensed data feeds as a credited service for AI developers.

Some startups provide transparency by revealing all data sources used for training, their origins, and which sources contribute to issues like hallucinations or copyright violations. This allows developers to remove problematic data feeds and retrain models without starting over. It also facilitates compensating data creators by paying for properly licensed feeds. 
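One way to picture that transparency: attach a hash, origin, and license to every training-data feed, so a flagged source can be traced and dropped before retraining. The schema below is a hypothetical sketch, not any vendor’s format:

```python
import hashlib
from dataclasses import dataclass


@dataclass
class DataFeed:
    """Provenance record for one training-data source."""
    name: str
    origin: str        # who created the data
    license: str       # e.g. "CC-BY-4.0" or "proprietary"
    content_hash: str  # fingerprint for tracing outputs back to inputs


def register_feed(name: str, origin: str, license: str, raw_bytes: bytes) -> DataFeed:
    """Fingerprint a feed at ingestion time so it can be traced later."""
    return DataFeed(name, origin, license,
                    hashlib.sha256(raw_bytes).hexdigest())


def drop_problematic(feeds: list[DataFeed], bad_hashes: set[str]) -> list[DataFeed]:
    """Remove flagged feeds so the model can be retrained on clean data."""
    return [f for f in feeds if f.content_hash not in bad_hashes]
```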

AI data curation could become a lucrative business, transforming potential copyright lawsuits into revenue streams simply by properly sourcing and crediting training data. Alternatively, those providing quality data could barter for free access to AI infrastructure instead of payments, creating a credits-based system.

Explain why you trust your AI

Shared experiences create a common bond between people that helps bring them together. Going through challenges, struggles, or meaningful moments allows individuals to understand each other's perspectives and develop a more profound sense of connection. This mutual understanding cultivates trust, as people recognize they have been through something profound together and can rely on one another. Ultimately, shared experiences provide a robust basis for relationships, unity, and trusted partnerships.

Similarly, AI must reinforce credibility with both the developer who trains it and the user who relies on its results by explaining its decision-making process and underlying reasoning.

The key aspect here is explainability – the ability of AI systems to provide understandable explanations for their outputs and decisions.

While some AI algorithms can be complex, making them difficult to interpret, explainability is crucial for building trust in AI systems. As I often emphasize, if a theory as complex as Einstein's relativity can be distilled into an expression as concise as E = mc², AI systems should be able to explain their reasoning in an understandable manner.
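Explainability tooling already exists for many model classes. As one illustrative sketch, scikit-learn's permutation importance reports which input features most drive a model's predictions; the dataset and model here are placeholders:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model; any tabular dataset and estimator would do.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Which features, when shuffled, hurt accuracy the most?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {score:.3f}")
```

A ranked list like this is not a full explanation, but it gives developers and users a concrete, inspectable reason to trust (or question) a prediction.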

When developing AI systems, we should not simply allow them to generate outputs without human oversight. Instead, developers should act as editors and curators, ensuring that the data feeding the AI models is relevant and appropriate for the intended use case. Additionally, we should provide transparency by documenting the source data and models used to generate the AI outputs.
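Documenting that lineage can be as simple as shipping a provenance record with every output. The fields below are an illustrative assumption of what such a record might include:

```python
import json
from datetime import datetime, timezone


def provenance_record(model_name: str, model_version: str,
                      data_sources: list[str], output_id: str) -> str:
    """Attach source-data and model lineage to a generated output."""
    return json.dumps({
        "output_id": output_id,
        "model": {"name": model_name, "version": model_version},
        "data_sources": data_sources,  # feeds the model was trained on
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }, indent=2)


# Hypothetical example values, for illustration only.
print(provenance_record("sales-assistant", "1.2.0",
                        ["licensed-crm-notes", "public-product-docs"],
                        "email-00042"))
```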

Explainable AI is a crucial concept that fosters trust in AI decisions and outcomes. Without explainability, it becomes challenging for humans to understand and trust the AI systems they interact with, hindering the widespread adoption and integration of AI technologies into various aspects of our lives.

The future of AI and enterprise adoption

The future of AI promises lives that are easier and less chaotic. Predicting what humans will do is hard enough; predicting what AI will do, without knowing what fear or favor controls its decisions, is a recipe for ethical disaster.

Want to learn more about moving past AI experimentation and expanding to enterprise-wide AI adoption? Read our viewpoint, AI without fear or favor.

About this author

Gary Jackson

Director, Consulting Expert

Gary Jackson focuses on business development for CGI TrustedFabric and CGI PulseAI. He also leads the Explainable AI (XAI) and Blockchain Tethered AI (BTA) collaboration with the University of Tennessee Knoxville. He previously led CGI Federal’s Web3/blockchain strategy for its Emerging Technology Practice and its ...