How to safely implement AI in an organisation

The uncertainty surrounding the use of artificial intelligence is balanced by the desire to maximise its benefits. Numerous companies have decided to implement AI-based systems in their organisations. Often, however, the initiative comes from employees, who incorporate such applications into their everyday work on their own. AI tools should be implemented only after a number of aspects have been considered, such as aims, uses, compliance, cybersecurity, and human and financial resources. It is therefore essential that implementation in an organisation be preceded by gathering information about the requirements of potential users and by introducing internal rules and regulations.

If artificial intelligence is treated as a tool for building a company's competitive advantage, the implementation should be given time so that IT and legal risks are minimised. Systems using artificial intelligence support a broad spectrum of activities in an organisation, such as marketing, HR, R&D, the creation and launch of products, and customer service. The use of artificial intelligence is also on the rise in sensitive areas such as business decision-making, including financial decisions of strategic importance to the company.

Despite their impressive potential, artificial intelligence-based systems are not without imperfections. Awareness of these limitations is the first step towards safe implementation.

Managing human resources vs. system bias

Companies have started to use AI in human resources management. With recruitment processes now supported by AI, their duration can be shortened (e.g., through a quick analysis of needs followed by verification of large numbers of resumes) and candidates can be matched more accurately. Considerable potential is also seen in employee training, where the ability to personalise content to individual needs is a major asset.

On the other hand, HR departments, which handle large amounts of sensitive and confidential data, illustrate two problems: bias and personal data protection.

AI can reproduce prejudices introduced into it when assessing candidates and employees. In one example, a recruitment algorithm favoured candidates of a specific sex, and as a consequence other candidates were discriminated against. Such failures can eliminate desirable candidates on the basis of an irrelevant assessment factor.

In addition, irregularities in the collection and processing of personal data create a risk of violating applicable laws.

To sum up, the systems used need to be monitored for bias, data must be kept secure, and recruitment and employee assessment practices need to be transparent. Finally, it should not be forgotten that internal processes and guidelines need to be aligned with external regulations, for example on innovation or labour.
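By way of illustration, here is a minimal sketch of one widely used bias check, the 'four-fifths rule', which compares shortlisting rates across candidate groups. The sample outcomes, group labels, and 0.8 threshold are assumptions for the example, not part of any particular HR system.

```python
from collections import Counter

# Hypothetical screening outcomes: (group_label, was_shortlisted).
# In practice these would come from the recruitment system's audit log.
outcomes = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rates(records):
    """Return the shortlisting rate per group."""
    totals, selected = Counter(), Counter()
    for group, shortlisted in records:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the best group's rate."""
    best = max(rates.values())
    return {g: r / best >= threshold for g, r in rates.items()}

rates = selection_rates(outcomes)
print(rates)                     # {'A': 0.75, 'B': 0.25}
print(four_fifths_check(rates))  # {'A': True, 'B': False} -> group B needs review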

Supporting decision-making processes vs. explainability and transparency

Analysing large amounts of data in a short time and efficiently identifying trends and patterns are tasks where artificial intelligence far outperforms humans. At the same time, there is an ongoing discussion about ethics in the context of artificial intelligence, in particular its explainability and transparency. One problem that deserves attention is the so-called 'black box': a system whose functioning is incomprehensible and cannot be clearly explained by a human.

Although these problems may seem primarily technological, they cannot be overlooked by organisations using such tools, for instance when a company must demonstrate how it took individual decisions, e.g., in a loan process. It should also be borne in mind that systems are only as good as the data they rely on. If the data is incorrect or biased, as mentioned above, the decisions taken on its basis may turn out to be wrong. Consequently, any move to automate decision-making processes should include human supervision and control. Even a well-prepared AI prompt will not replace years of business experience, in-depth knowledge of the company, or an understanding of customer needs.
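One way to implement such supervision is to route low-confidence or high-stakes automated decisions to a human reviewer. The sketch below assumes a hypothetical score_application model call and policy thresholds set by the organisation; it is an illustration, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    approved: bool
    confidence: float  # model's own confidence estimate, 0.0-1.0
    amount: float      # loan amount under consideration

def score_application(application: dict) -> Decision:
    # Placeholder for a real model call; a trivial stand-in here.
    return Decision(approved=True, confidence=0.72, amount=application["amount"])

# Assumed policy thresholds -- set by the organisation, not by the model.
CONFIDENCE_FLOOR = 0.9
AMOUNT_CEILING = 50_000

def decide(application: dict) -> str:
    d = score_application(application)
    # Escalate whenever the model is unsure or the decision
    # carries strategic or financial weight.
    if d.confidence < CONFIDENCE_FLOOR or d.amount > AMOUNT_CEILING:
        return "ESCALATE_TO_HUMAN_REVIEW"
    return "APPROVE" if d.approved else "REJECT"

print(decide({"amount": 120_000}))  # -> ESCALATE_TO_HUMAN_REVIEW
```

The thresholds encode the organisation's risk appetite in one auditable place, which also makes it easier to demonstrate afterwards how an individual decision was taken.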

More efficient content generation vs. hallucination

Operating in many markets around the world, with immediate access to personalised information, companies look for ways to streamline the creation of various types of content, from analyses and reports to promotional materials and product ideas. This considerable efficiency should not obscure the significant risk of losing control over the content and of so-called 'artificial hallucination', where the algorithm generates false information when it does not know, or is unsure of, the answer to the question asked.

An employee who is unaware of an application's limitations may use content generated by a chatbot without verifying its correctness, especially since the information seems credible. This, however, can have severe consequences for the company. Some US lawyers learned this when they were penalised for quoting fictitious court decisions in a brief filed with a court. They did not verify the content generated by a chatbot and thereby risked not only financial penalties but also reputational damage.

Although the problem of hallucination cannot be completely eliminated at this stage of AI development, users are not defenceless against the unreliability of chatbots. Teaching users to write precise prompts and raising awareness of the need for limited trust in AI-generated content contribute to safer use of the available tools. Enterprises should ensure that mechanisms for quality control and supervision of algorithm-created content are in place.
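As one illustration of such a quality-control mechanism, the sketch below holds AI-generated text for human review whenever it contains citation-like references, the very content the US lawyers failed to verify. The regex and workflow are simplified assumptions for the example.

```python
import re

# Hypothetical guard: block publication of AI-generated text until every
# citation-like reference in it has been verified by a human reviewer.
CITATION_PATTERN = re.compile(r"\b\w[\w.'& ]+ v\.? [\w.'& ]+\b")  # e.g. "Smith v. Jones"

def unverified_citations(text: str, verified: set[str]) -> list[str]:
    """Return citation-like strings not yet ticked off by a reviewer."""
    found = CITATION_PATTERN.findall(text)
    return [c for c in found if c.strip() not in verified]

draft = "As held in Smith v. Jones, the duty of care extends to..."
pending = unverified_citations(draft, verified=set())
if pending:
    print("Hold for human review:", pending)
else:
    print("Cleared for publication.")
```

A pattern match like this cannot tell a real case from an invented one; its only job is to force a human check before the content leaves the company.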

Personalised marketing and sales vs. information bubble

Marketing and sales departments make eager and extensive use of artificial intelligence. Personalised ads and recommendations have become standard in effective campaigns and in improving customer experience. Enterprises are also cutting costs by automating customer service, e.g., with chatbots.

When opening a streaming service, logging back in to a favourite store, or using social media, recipients see content almost fully tailored to their needs or world view. Amid the flood of information and data, this is a great advantage, but it also entails the risk of creating a so-called information bubble.

If companies wish to remain credible and maintain the trust of their clients, they should ensure respect for privacy while keeping personalisation ethical. Here, too, it is key that client data is used in accordance with applicable regulations. In the context of ethics, it should also be remembered that where artificial intelligence is used in contacts with clients or employees, it is absolutely necessary to inform them clearly that they are interacting with a chatbot and not a human being.
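A minimal sketch of how such a disclosure might be built into a chatbot session follows; the greeting text, handover keyword, and session structure are assumptions for illustration, not a real customer-service API.

```python
# The disclosure is sent before any other interaction takes place.
DISCLOSURE = (
    "Hello! You are chatting with an automated assistant, not a human. "
    "Type 'agent' at any time to be transferred to a person."
)

def start_session() -> list[str]:
    # The disclosure is always the first entry in the transcript.
    return [f"bot: {DISCLOSURE}"]

def handle(transcript: list[str], user_message: str) -> str:
    transcript.append(f"user: {user_message}")
    if user_message.strip().lower() == "agent":
        reply = "Transferring you to a human colleague."
    else:
        reply = "(placeholder for the model's answer)"
    transcript.append(f"bot: {reply}")
    return reply

session = start_session()
print(session[0])                # the mandatory disclosure
print(handle(session, "agent"))  # the promised route to a human
```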

Product development vs. artificial intelligence

Generative technologies are becoming ever more ingenious and autonomous in designing new products, developing services, and even supporting the creation of inventions. This is reshaping the approach to the legal protection of company interests, especially in the spheres of copyright and patent law. Entrepreneurs who use intellectual property rights to build their market position are particularly interested in the impact of artificial intelligence on this sphere.

A key issue here is how we understand who can be a creator, first raised in connection with the DABUS system, which Stephen Thaler named as the sole inventor in his patent applications. There have since been other attempts to grant creator status to artificial intelligence, such as the application to enter works created with the Midjourney app in the US copyright register.

The impact of AI-related technological advancement on the development of new technical solutions, on research and development projects, and on the generation of music, images, or text is one of many legal challenges. Using AI significantly increases the risk that a company's new product or marketing campaign is based on someone else's works, which would mean infringing third-party rights. At the same time, companies are less able to protect their own AI-created assets that may be essential to the enterprise, such as an AI-created corporate design.

So far, most offices competent to decide on applications of this type have taken the clear stance that under current law only a natural person, i.e. a human, can be a creator. The term 'work' is now treated with caution: things created by artificial intelligence systems do not meet the legal prerequisites and should rather be treated as 'products', whose protection is doubtful in light of copyright law. It is therefore worth taking the time to seek legal advice, put internal solutions in place, and train employees, thereby minimising the legal risks associated with using new technologies.

Liability for infringement

Intellectual property rights are much discussed in the context of artificial intelligence also because of the increasing number of court actions for infringement of exclusive rights. OpenAI, Stability AI, and Meta are only a few of the companies against which actions have been brought for copyright infringement. These cases largely concern the unlawful use of copyright-protected works to train algorithms. There are also claims arising from infringement of trademark protection rights, as in the action brought by Getty Images against Stability AI.

The line of jurisprudence now taking shape before our very eyes will have a major impact on how algorithms are trained and used. Entrepreneurs using AI-based tools should exercise special caution, because a company may be held liable for algorithm-generated content that infringes existing exclusive rights.

Public access to apps and systems using artificial intelligence represents an unexpected breakthrough. This revolution cannot be stopped, which is why, instead of debating whether to use AI-based tools to support companies in their operational and strategic activities, we need to focus on how to do so safely.


Małgorzata Furmańska, legal adviser, JWP Patent and Trademark Attorneys

Dorota Rzążewska, legal adviser, patent and trademark attorney, mediator, JWP Patent and Trademark Attorneys
