A cosplayer in a self-made Cylon costume in an office setting, how cool is that?
AI is coming for your job!
Martijn Veldkamp
March 15, 2024
There were some awesome news articles about AI and the new things it can do:
Devin, a generative AI-powered coding engineer from Cognition
OpenAI, the company behind ChatGPT, working with robotics firm Figure to develop a humanoid robot
Quite a few releases around AI and Project Management
Yes, I’m skipping Sora; enough has been said about it. I love these new developments and the way they push us to think about where we want to position them. Is AI coming for our jobs?
Marketing buzz
Since the first iterations of ChatGPT, you have to have something with AI in your product or it doesn’t count. For example, there are other AI companies with products similar to Devin, but without the marketing savvy to create such a buzz. So what is new? They have found out that startups still need to make the product look good and easy to use, and to showcase the problem area it can help with.
That humanoid robot triggered a lot of great discussions. Could it replace a reception desk? Or replace the ordering tablets that a lot of fast-food shops have? In some restaurants you already have robots helping to carry the dishes.
What do we want to solve with the humanoid shape? Address trust issues? A better fit with our current infrastructure, such as our houses? I do not know, but as a nerd I love that I have read a lot of science fiction books that address some of these concerns.
I, Robot by Isaac Asimov
Marketing buzz around AI often emphasizes the transformative potential of artificial intelligence. Everybody is showcasing how innovative their applications are and trying to generate excitement about their capabilities.
However, amid the hype, it’s crucial to consider how, when and where these can actually be implemented and integrated into your business operations.
We need to think beyond the flashy headlines and buzzworthy announcements.
Outsourcing Tasks
Besides the humanoid robot, all these new and fantastic tools focus on the tasks at hand. What is (still) key in doing IT projects and implementations? The human element! There are still business stakeholders and problem owners. Just as with any new tool, you have to answer for yourself: “What part of the project will we do with AI?” or “Where does AI fit and help reach the desired outcome faster?”
It could be tooling that helps type out notes and creates action items and Jira tickets, followed by tooling that picks up said tickets and writes or updates code for them.
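To make that concrete, here is a minimal sketch of what that first step could look like, assuming an LLM drafts Jira tickets from meeting notes and a human approves each one before it is created. The OpenAI SDK, the model name, the environment variables and the project key are illustrative assumptions, not a recommendation of a specific stack.

```python
# Minimal sketch (not production code): draft Jira tickets from meeting notes
# with an LLM, and let a human approve each draft before it is created.
# Assumptions: OpenAI Python SDK, a Jira instance reachable via its REST API,
# and placeholder values for the model name, env vars and project key.
import json
import os

import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_tickets(notes: str) -> list[dict]:
    """Ask the model to turn raw meeting notes into ticket drafts."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system",
             "content": "Extract action items from the notes. Reply with only "
                        "a JSON array of objects with 'summary' and "
                        "'description' fields."},
            {"role": "user", "content": notes},
        ],
    )
    # A real implementation would validate or repair the model output here.
    return json.loads(response.choices[0].message.content)


def create_jira_issue(ticket: dict) -> None:
    """Create one issue via Jira's REST 'create issue' endpoint."""
    payload = {"fields": {
        "project": {"key": "PROJ"},          # placeholder project key
        "summary": ticket["summary"],
        "description": ticket["description"],
        "issuetype": {"name": "Task"},
    }}
    requests.post(
        f"{os.environ['JIRA_BASE_URL']}/rest/api/2/issue",
        json=payload,
        auth=(os.environ["JIRA_USER"], os.environ["JIRA_API_TOKEN"]),
        timeout=30,
    ).raise_for_status()


if __name__ == "__main__":
    notes = open("meeting_notes.txt").read()
    for ticket in draft_tickets(notes):
        print(f"\nDraft: {ticket['summary']}\n{ticket['description']}")
        # The human stays the problem owner: nothing is created without a yes.
        if input("Create this ticket? [y/N] ").lower() == "y":
            create_jira_issue(ticket)
```

The deliberate choice in this sketch is the approval prompt: the tooling drafts, but a person remains accountable for what actually lands in the backlog.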
But who owns the outcome? Who will they look at (blame) if the wrong price is displayed on the website or app? For now these are extra tools in our toolboxes that make us more productive. We are still discovering where and how. Anybody approving pull requests?
Sustainability?
Next to the high energy demands of the cloud infrastructure needed to run these VLMs, LLMs, etc., there is also a question about the source material needed to train these models. They need a great deal of accurate information in order to work.
What happens to the quality of the accessible information needed to train these models if the public space is flooded with AI imagery of people with six fingers on one hand, or TikToks where every voice sounds like a computer?
The question of source material was made extra obvious when OpenAI’s CTO stated during an interview that she doesn’t know where the data used to train the text-to-video model Sora comes from. Or she didn’t want to say.
There is still a lot of potential risk involved. What if a class action lawsuit over data theft is successfully brought?
European AI Act
It is not just us normal people who wonder about AI and how far we should allow it to go. The EU Parliament has approved the world’s first comprehensive AI regulation, aiming for safe and ethical AI development in the European Union.
As the world’s first comprehensive AI law, it aims to address risks to health, safety and fundamental rights. The regulation also protects democracy, the rule of law and the environment.
The Act has significant implications for the development and deployment of AI within the European Union. It establishes a regulatory framework that sets standards and guidelines for the development, deployment, and use of AI systems within the EU.
The Act also introduces a risk-based approach to AI regulation. It categorizes AI systems into different risk levels based on their potential impact on safety, fundamental rights, and societal well-being. High-risk AI systems, such as those used in critical infrastructure, healthcare, or law enforcement, will be subject to more stringent requirements and oversight.
The European AI Act prohibits AI practices that pose significant risks to individuals or society. This includes the use of AI for social scoring systems that could infringe on privacy or manipulate behavior, as well as techniques that undermine human autonomy or decision-making.
The Act emphasizes transparency and accountability in the development and deployment of AI systems. Developers and users of AI technologies are required to provide clear information about how AI systems operate, including their capabilities, limitations, and potential biases. They are also held accountable for the outcomes of AI systems, including any harm caused by their use.
It also addresses concerns related to data quality and bias in AI systems by requiring developers to use high-quality data sets that are representative and diverse. Developers must also implement measures to mitigate biases and ensure fairness in AI decision-making processes, particularly in high-risk applications such as recruitment or lending.
Not unimportantly, the Act also covers enforcement: it designates national competent authorities responsible for overseeing AI compliance within member states, and imposes fines and penalties for non-compliance.
Overall, the European AI Act represents a significant step towards establishing a regulatory framework that balances the benefits of AI innovation with the need to protect individuals and society from potential risks and harms associated with AI technologies. It reflects the EU’s commitment to promoting responsible AI development and ensuring that AI serves the public interest.
I totally agree with Eric Loeb’s blog post:
At the same time we all are witnessing the meteoric rise of generative AI. This is a technology and a sector that is moving incredibly quickly. I think that this will be only the beginning of a much-needed regulatory regime building that will need to be adaptive and agile in its own right.
Security
With all new technology leaps there are growing pains, and securing these new services needs to remain top of mind.
Salt Security discovered various ChatGPT plugins had critical security flaws. These plugins allow the AI tool to access other websites and perform certain tasks, such as committing code in GitHub and retrieving data from Google Drive.
With these flaws, threat actors could have taken over third-party accounts, and accessed the sensitive data therein. The flaws have since been remediated.
Luckily we have great initiatives like OWASP that can help us address these new AI cybersecurity risks.
Salesforce’s approach is secure by design: the Trust Layer.
Conclusion
Is AI coming for our jobs?
Let’s be honest. The impact of AI on jobs is not uniform across industries or job roles. It varies depending on factors such as the nature of the work, the level of task automation possible with current AI technology, and the adaptability of the workforce. Here are some key points to consider when discussing the varying impact of AI on jobs:
There is great automation potential. Certain industries and job roles are more susceptible to automation by AI than others. Like I stated before, parts of our jobs that involve repetitive tasks, data processing, or routine decision-making are more likely to be automated. And we have already been doing that for quite some time: with Salesforce we advise Customer Service teams to establish chatbots to handle routine inquiries, so your employees can handle the value-add ones.
AI can also augment human capabilities rather than replace them entirely. AI technologies already assist in performing tasks more efficiently or handling complex data analysis. For instance, in healthcare, AI tools help doctors analyze medical images faster and more accurately.
At the same time, AI will create or shift towards new jobs. It creates new job opportunities in emerging fields related to AI development, implementation, maintenance and governance, like the new EU AI Office. These jobs may require skills such as data analysis, machine learning, and AI ethics. For example, the demand for data scientists, AI engineers, and cybersecurity experts has increased with the rise of AI technologies.
There is also an impact on skills and training. The adoption of AI often necessitates reskilling or upskilling the workforce to adapt to changing job requirements. Jobs that require our uniquely human skills such as empathy, emotional intelligence, and critical thinking are less likely to be automated and may even become more valuable in the AI-driven economy. But then again, with Salesforce and its three releases a year, you always need to stay relevant and update your knowledge. AI is just another step in our learning journey.
Acknowledging the complexity of the impact of AI on jobs is crucial for developing effective policies and strategies to address potential challenges and capitalize on opportunities for your company. It requires considering the nuanced dynamics within your industry and specific job market rather than adopting a simplistic “AI will replace all jobs” narrative.
The impact of AI on jobs will vary significantly between sectors. Industries like finance and retail will experience significant disruption due to AI-driven automation and digital transformation, while sectors such as healthcare and education may see more opportunities for augmentation and innovation.