In our third and final Q&A with AI expert Jacob Robinson, we look at:
AI opt-out features
How to stay up to speed with AI
Best practices when working with AI vendors, including global AI initiatives
Whether your organization needs an AI policy
Read Jacob's Q&A part one and Q&A part two.
Should we trust AI opt-outs?
Code Words: Let's look at users inputting information into gen AI tools. This data is stored on servers and might be used to improve the generative AI, unless the user employs an AI opt-out. What caution should be taken, especially around the use of confidential or proprietary information?
Jacob: Let’s start with definitions.
Proprietary large language models (LLMs) are owned by a company and can only be used by customers who purchase a license. The license may restrict how the LLM can be used. A lot of companies in 2023 didn't want their employees using the technology (probably a wait-and-see approach, or the fear factor), yet employees went ahead and used it regularly on their own anyway, and 57% say they want better training.
Open-source LLMs are free and available for anyone to access, use for any purpose, modify, and distribute. Many have terms of use outlined.
With the above in mind:
Opt-outs are dodgy and I typically don't trust them.
Everything is still new and, as we've seen with past tech disruptions, it will take at least a few years for the 'dust to settle'. That's also why we had the open letter calling for a six-month pause and the AI Safety Summit, in the hopes that we retain control of the tech and not the other way around.
What's not new are stories of people opting out and still having their information leaked.
When information is out there, there is no taking it back.
AI is not a calculator, meaning that when you give it the same prompt 50 times, it can return 50 different answers rather than the same answer 50 times.
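For the technically inclined, that variability is easy to demonstrate. Here is a minimal sketch assuming the OpenAI Python client and an API key in your environment (the model name is just an illustrative choice): it sends the identical prompt three times, and because the model samples its output, the three replies can all differ.

```python
# Minimal sketch: same prompt, potentially different answers every run.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY set in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = "In one sentence, why shouldn't I paste confidential data into a chatbot?"

# Send the identical prompt three times. Because the model samples its
# next token (temperature > 0), each run can come back worded differently.
for run in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sampling on; even temperature=0 isn't strictly deterministic
    )
    print(f"Run {run + 1}: {response.choices[0].message.content}")
```

A calculator, by contrast, gives the same answer on every run, which is exactly why AI output needs a human check.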
Getting the latest on AI
Code Words: How do you recommend comms practitioners and marketers stay up to date on AI?
Jacob: Immerse yourself in news and conversations, and keep up with the latest. For example, the Writers Guild of America strike was centered on protecting intellectual property, deterring misuse, and protecting human creative capital, among other things.
A good rule of thumb is to set aside 30 minutes each day to use the technology. If you’re not sure where to start, ChatGPT is a good place. Follow the right AI Substacks, newsletters, YouTubers, and Medium.com writers.
Working with AI vendors; global AI initiatives
Code Words: Most businesses conduct checks on new vendors to make sure they follow legal requirements and relevant business practices. I’ve heard nothing about vetting vendors of the AI tools we use. So how can an organization make sure AI vendors follow a country’s laws and regulations? Is it possible to check whether they comply with the required licenses? Can organizations ask how an AI business maintains its technology, or whether it relies on other AI tools as part of its service?
Jacob: A lot depends on where in the world you are. Countries are moving at their own speeds:
The EU is leading the charge, followed by China and the US.
The UK AI Safety Summit in late 2023 resulted in the Bletchley Declaration, signed by 28 countries plus the European Union, including Canada, China, and the US. Getting a consensus was a step in the right direction. The document acknowledges that there are risks and pledges to explore them. The summit also pointed to many more global AI safety initiatives around awareness, information, and safeguarding. We can expect future reviews of education curriculums.
We will likely see clearly defined, built-in red lines and safeguards to neutralize the AI should something go awry, though this was not mentioned explicitly in coverage of the summit.
Do businesses and agencies require an AI policy?
Code Words: Do PR and marketing agencies require policies on AI use? What key elements should be covered in an AI use policy? Should it be developed by a lawyer, or at least in consultation with a lawyer?
Jacob: An AI policy is a good idea. Some key elements should include:
Purpose/summary.
Use cases.
Recommendation(s).
Worst-case scenarios.
Vetted prompt library access.
Clear outline of workflow integration.
What is absolutely unacceptable, and the possible repercussions.
The Reddit rule: if you wouldn't post something publicly on Reddit, don't put it into an AI tool.
A section on how the industry/sector is responding.
Regular reinforcement and review of the policy.
Ensuring the policy becomes part of training and onboarding, regular lunch and learns, and so on, so it’s not just a piece of paper that one or two people know about.
Humans need to write the AI policy and get a trusted second opinion to strengthen it.
Lawyers can be useful in certain scenarios, for example, in protecting intellectual property. But always consider the time and cost, and what your needs are before committing to any decision.
Do you have questions about AI for Jacob? Send them to us and we'll forward them to him and publish the answers soon.
Read Jacob's Q&A part one and Q&A part two.