
When AI Joins as New Employee, Who Bears Responsibility for Work Mishaps?


In the era of the digital economy, the role of humans is changing. As AI becomes a new employee of the enterprise, human-AI collaboration will gradually become the norm in production. When carbon-based and silicon-based intelligence collide, what new collaborative models can improve overall efficiency? And when a collaborative task between humans and machines goes wrong, how should responsibility be divided between human and AI?

Working with AI Requires the Right Matching Mode

What can organizational management do to make human-machine collaboration systems better?

Research in this field is still at a preliminary stage: most companies are still exploring, and mature studies are scarce. Based on the author's current understanding, organizational management in human-machine collaboration can be summarized in three main aspects: matching enterprise tasks with human-machine systems; adjusting organizational management methods after AI employees join the organization; and managing responsibility between humans and AI within the system.

The first consideration is matching job tasks to human-machine collaboration modes. The heterogeneity of users affects the effectiveness of human-machine collaboration, and the nature of a task imposes its own requirements on the collaboration mode. Organizations therefore need to adjust the proportion of human and AI involvement in a given task according to its attributes.

Currently, the generally accepted task matching management model mainly includes three dimensions: computability, subjectivity, and complexity.

From the perspective of computability, if a task requires computation beyond the processing capacity of the human brain, letting AI take the lead may be more effective. For example, when planning a route from point A to point B, a human may draw on experience but can only choose among a few known paths, whereas AI can systematically search all possible paths and find the optimal solution.
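
As a minimal illustration of this computability argument, the sketch below uses Dijkstra's algorithm to search every reachable route in a small road network; the network and its weights are hypothetical, invented purely for demonstration.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: systematically explores every reachable
    route and returns the cheapest one, something a human planner
    juggling a handful of known roads cannot do at scale."""
    # Each queue entry is (cost so far, node, path taken).
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical road network; edge weights are travel minutes.
roads = {
    "A": [("B", 7), ("C", 3)],
    "C": [("B", 2), ("D", 8)],
    "B": [("D", 4)],
}
print(shortest_path(roads, "A", "D"))  # (9, ['A', 'C', 'B', 'D'])
```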

From the perspective of subjectivity, if the task is highly subjective or requires flexible adaptation, such as medical services or catering services, humans may be more suitable to play the leading role.

From the perspective of complexity, complex decision-making scenarios involve more factors with more intricate relationships among them. In complex tasks such as food-delivery dispatch or spatial simulation, humans can easily overlook relevant factors and degrade decision quality; in such cases, increasing the AI's share of the task may be more advantageous.
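
To make the three-dimension matching model concrete, here is a toy heuristic that maps computability, subjectivity, and complexity scores to a suggested AI share of a task. The 0-1 scoring scale and the weights are illustrative assumptions only, not taken from any published model.

```python
def suggest_ai_share(computability: float, subjectivity: float,
                     complexity: float) -> float:
    """Toy heuristic: each dimension is scored 0-1 by the organization.
    Higher computability and complexity push toward AI leadership;
    higher subjectivity pushes toward human leadership. The weights
    below are illustrative assumptions only."""
    score = 0.3 * computability + 0.3 * complexity - 0.4 * subjectivity
    return min(1.0, max(0.0, 0.5 + score))

# Route planning: highly computable, barely subjective -> AI-led.
print(suggest_ai_share(computability=0.9, subjectivity=0.1, complexity=0.6))  # ~0.91
# Medical service: highly subjective -> human-led.
print(suggest_ai_share(computability=0.3, subjectivity=0.9, complexity=0.5))  # ~0.38
```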

For example, a recent study found that tasks involving creative evaluation are relatively vague and subjective. Even with the support of theory-grounded explainable AI, machines cannot mimic the judgments of human experts, especially in screening for particularly outstanding ideas. Such tasks require human leadership.

However, if human experts perform all creative evaluations, the sheer volume of cases can leave them bored and fatigued, impairing their ability to judge objectively.

The study therefore proposes a division of labor: let AI screen out low-quality ideas first, reducing the experts' workload and boredom so they can focus on evaluating and selecting excellent ideas.
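
A sketch of such a two-stage pipeline might look like the following; `ai_quality_score` is a hypothetical placeholder standing in for whatever trained model an organization actually uses.

```python
def ai_quality_score(idea: str) -> float:
    """Hypothetical stand-in for a trained quality model; here, a crude
    word-count proxy so the example stays self-contained."""
    return min(len(idea.split()) / 50, 1.0)

def triage(ideas, threshold=0.3):
    """Stage 1 (AI): discard ideas scoring below the threshold.
    Stage 2 (human): experts evaluate only the survivors."""
    survivors = [idea for idea in ideas if ai_quality_score(idea) >= threshold]
    print(f"AI filtered out {len(ideas) - len(survivors)} of {len(ideas)} submissions.")
    return survivors

ideas = ["ok idea", "a detailed proposal " * 10, "meh"]
for idea in triage(ideas):
    print("forwarded to expert panel:", idea[:40], "...")
```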

A young lecturer at Fudan University's School of Management has conducted an interesting related study. He found that when task outcomes depend mainly on luck, as in course selection, card draws in games, or blind-box purchases, people tend to choose services provided by AI, believing AI to be luckier than humans and more likely to deliver the ideal result. For tasks that depend on ability, however, such as financial management and healthcare, people prefer human-led systems.

This finding to some extent confirms that companies need to match different types of tasks with different human-machine collaboration systems.

Does AI Enhance or Erode Work Skills?

With the addition of AI, the enterprise workforce has shifted from purely human employees to a mix of humans and AI agents. Enterprises thus face new management challenges, including how to manage AI employees and how to adjust organizational strategies to accommodate them. Many studies have found that employees' perceptions of fairness and their emotions change significantly after they begin cooperating with AI.

For example, the introduction of intelligent auditing may make employees anxious and insecure, dampening their willingness to innovate independently and, in turn, their performance. Against this backdrop, many scholars in organizational behavior have begun studying how the introduction of AI employees affects workers' psychological states, teamwork, and leadership, as well as human-resources planning, recruitment, training, and management processes.

One recent study explored how employees adapt to changes in their work after an enterprise introduces an intelligent knowledge management system. It found two adaptation mechanisms among employees collaborating with AI-supported systems.

The first is the utility maximization mechanism: employees make maximal use of AI to extend their cognitive abilities and thereby improve their performance. The second is the interference minimization mechanism: AI disrupts employees' original workflow and creates role conflict, which in turn impairs their performance.

Further analysis found that new employees who adopt the utility maximization strategy improve their performance fastest, while veteran employees who adopt the interference minimization strategy also perform relatively well.

After introducing such AI systems, companies are therefore advised to encourage new employees to adopt the utility maximization mechanism, that is, to use AI to learn new knowledge and improve their performance. For veteran employees who are accustomed to the original workflow, organizations should help them adjust their work frameworks and processes around AI to reduce role conflict and alleviate the interference AI causes.

Another related topic is managing changes in employees' work skills during human-machine collaboration. Once AI takes over computation-heavy, repetitive, and highly structured tasks, humans can in theory move to more meaningful, creative work that builds their skills. We call this phenomenon "AI-induced skill enhancement."

For programmers, for example, AI assistance frees them to spend more time on a program's business logic rather than most of their time on debugging. For some knowledge workers, however, AI may not enhance skills at all; it may instead drive a trend of "de-skilling."

A 2023 study showed that after doctors used diagnostic assistance systems, the accuracy of their independent diagnoses dropped significantly; the AI-assisted diagnostic system had effectively "de-skilled" them. Another study found that the introduction of surgical robots greatly reduced resident physicians' hands-on operating practice, eroding their manual skills.

Resident physicians therefore now need different methods than before to build their manual skills. One medical scholar has proposed a method called shadow learning, which cultivates manual skill through early specialization, abstract rehearsal, and supervised practice.

Who is Responsible When Tasks Fail, Humans or AI?

Beyond motivating and training employees to adapt to human-machine collaboration, a greater challenge for enterprises is determining the respective responsibilities of humans and AI, especially who bears responsibility when a service fails.

The question of liability after autonomous-vehicle accidents has already sparked wide discussion. The mainstream view is that AI systems are essentially machines that cannot bear legal responsibility, so the accountable party should be the person or organization behind the system. Some studies, however, argue that as algorithmic transparency increases, AI can be held responsible to some extent for what it does.

When humans and AI complete tasks together, how enterprises allocate the corresponding responsibilities therefore becomes a pressing issue. In other words, whether an enterprise is willing to proactively take responsibility for AI failures will affect the adoption of human-machine collaboration systems and, ultimately, performance.

This issue spans legal, policy, technological, and management research, and although it is widely discussed, mature and reliable studies remain rare. A research team at Fudan University is currently attempting an exploratory study in this area, set on an internet medical platform.

As more and more internet medical platforms adopt generative artificial intelligence to provide AI consultation services, whether platforms need to provide support for the responsibility of AI consultations has become a very interesting research topic.

Through this study, they hope to explore whether internet medical platforms are willing to take responsibility for AI consultation services, and how this decision affects patients’ willingness to use the platform.

The Fudan team plans to study two scenarios: pure AI consultations, and mixed consultations involving both AI and doctors. They will manipulate patients' perception of platform responsibility and examine patients' trust in, and willingness to use, the platform, introducing the perceived neutrality of AI versus doctor consultations as a moderating variable.

Their theoretical hypothesis is that, other things being equal, when the platform is willing to take responsibility for consultation quality and patients perceive AI consultations as more objective and neutral, patients will be more willing to use the platform's AI consultation function. The study is ongoing, and the team looks forward to sharing more results.
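
For readers unfamiliar with moderation analysis, the sketch below shows how such a hypothesis is commonly tested: by fitting an interaction term between the manipulated responsibility condition and perceived neutrality. The data are synthetic and every effect size is an assumption made up for illustration, not a finding from the Fudan study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the planned experiment: a binary responsibility
# condition, binary perceived neutrality, and a continuous willingness-
# to-use outcome. All effect sizes below are invented for illustration.
rng = np.random.default_rng(0)
n = 500
responsibility = rng.integers(0, 2, n)  # platform takes responsibility?
neutrality = rng.integers(0, 2, n)      # consultation seen as neutral?
willingness = (
    3.0
    + 0.8 * responsibility               # assumed main effect
    + 0.3 * neutrality
    + 0.6 * responsibility * neutrality  # assumed moderation effect
    + rng.normal(0, 1, n)
)
df = pd.DataFrame({"responsibility": responsibility,
                   "neutrality": neutrality,
                   "willingness": willingness})

# A significant interaction coefficient would indicate that perceived
# neutrality moderates the effect of platform responsibility.
model = smf.ols("willingness ~ responsibility * neutrality", data=df).fit()
print(model.params)
```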

In summary, the latest research suggests that enterprises match collaboration models to task types, rethink the incentives and training of AI and human employees working together, and manage the responsibilities of AI systems, so as to fully exploit the complementary strengths of humans and AI, let each learn from the other, and improve the overall efficiency of human-machine collaboration.
