Behavioral Health, Artificial Intelligence, And Compliance

The rapid development and adoption of technology in health care offers significant potential benefits for behavioral health patients, but it also raises ethics and compliance concerns. Among the most recent of these developments is the use of artificial intelligence (AI). Unfortunately, laws, rules, and regulations do not evolve as quickly as technology. Compliance professionals will want to stay in close contact with departments considering the use of AI, including behavioral health, because of the ethical and privacy concerns involved. Stigma around mental health persists and can create problems at work and in other activities for people with mental health conditions. When compliance collaborates with departments such as behavioral health, these concerns can be addressed.

The COVID-19 pandemic has put mental health issues in the spotlight, and both the Biden administration and the Substance Abuse and Mental Health Services Administration (SAMHSA) plan to address the issues identified. The Biden administration is working to improve insurance coverage for mental health care, while SAMHSA is tightening requirements around the disclosure of patient records, especially those involving substance use disorders. Both efforts are expected to explore the use of AI as a cost-effective way to address mental health issues.

Functionality of artificial intelligence

So what is artificial intelligence? According to IBM, AI "leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind."[1] In simple terms, computers are made to "think" like humans.

Research suggests that AI offers many potential improvements in the treatment of mental health conditions. A study published in the Journal of Medical Internet Research found that AI "is associated with significant improvements in substance use, self-esteem, food addictions, depression, and anxiety."[2] The authors note that AI's advantages include its ability to compare and analyze large amounts of data and to increase the "equity and access" of mental health treatment.[3] One of the disadvantages identified concerns prediction: some ethnic groups lack access to mental health care, and that lack of access means a lack of the data AI needs for its analysis, reducing the likelihood of accurately predicting problems in those populations, as the sketch below illustrates.[4]
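To make the data-scarcity point concrete, here is a minimal sketch on entirely synthetic data (the features, group sizes, and outcome rule are invented for illustration) of how a model trained mostly on one well-represented group can predict markedly worse for an underrepresented group:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Three synthetic "screening" features; `shift` gives each group its own
    # relationship between features and outcome.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 3))
    y = (X.sum(axis=1) + rng.normal(scale=1.0, size=n) > shift * 3).astype(int)
    return X, y

# Group A dominates the training data; group B contributes few records,
# mirroring unequal access to care.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fresh samples from each group reveal the accuracy gap.
Xa_t, ya_t = make_group(1000, shift=0.0)
Xb_t, yb_t = make_group(1000, shift=1.5)
print("group A accuracy:", accuracy_score(ya_t, model.predict(Xa_t)))
print("group B accuracy:", accuracy_score(yb_t, model.predict(Xb_t)))
```

Because group B supplies only a sliver of the training data, the model effectively learns group A's decision boundary and misses group B's, which is the dynamic described above for populations with limited access to care.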

The few populations for which data are available have shown high accuracy in predicting suicidal ideation as well as major mental health problems.[5] A Vanderbilt University study found that by drawing on information from medical records, demographics, and admissions data, AI achieved up to 80% accuracy in predicting whether someone would attempt suicide.[6] With all these advantages, AI seems destined to stay. At the same time, however, there are many ethical and privacy issues to consider and address.

Ethical concerns

One ethical concern is illustrated by Dinerstein v. Google.[7] In that case, Matt Dinerstein sued because his de-identified information had been provided to Google under an agreement to use the data for a research project. Google has access to information from other applications that could allow it to re-identify the data. With electronic health records and applications that collect individuals' health information, and smartphones that can pinpoint individuals' geographic locations, re-identification becomes a reality and de-identification a myth.[8]
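The linkage risk behind that concern can be sketched in a few lines: joining a "de-identified" clinical extract to any auxiliary data source on shared quasi-identifiers can put names back onto records. Everything below (ZIP codes, birth years, names, diagnoses) is hypothetical:

```python
import pandas as pd

# De-identified clinical extract: no names, but quasi-identifiers remain.
clinical = pd.DataFrame({
    "zip": ["46802", "46804", "46802"],
    "birth_year": [1975, 1990, 1975],
    "sex": ["F", "M", "M"],
    "diagnosis": ["major depressive disorder", "anxiety", "substance use disorder"],
})

# Hypothetical auxiliary data scraped or leaked from another application.
app_profiles = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "zip": ["46802", "46804"],
    "birth_year": [1975, 1990],
    "sex": ["F", "M"],
})

# Joining on the shared quasi-identifiers attaches a name to a diagnosis.
reidentified = clinical.merge(app_profiles, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```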

Another ethical issue comes from a report on an organization that used AI to provide counseling without informing patients.[9] In that case, a mental health organization used a chatbot to provide care to 4,000 patients without telling them that a human was not delivering the service.[10] There are also concerns that AI may be vulnerable to abuse, intentional or not, through the alteration of data fed into the system.[11]

As mentioned earlier, there is a gap between technological advances and regulators' efforts to keep pace with these rapid changes. The U.S. Food and Drug Administration (FDA) has stated that the current regulatory process is "not equipped to keep up with the pace of change" needed to ensure safety and efficacy.[12] Additionally, the U.S. Department of Health and Human Services' Office for Civil Rights (OCR) and SAMHSA have promised new regulations; years later, however, none have appeared. Compliance professionals can only hope that when the new regulations are released, they will also address AI-related concerns. In the meantime, the lack of regulation leaves healthcare organizations to recognize security, ethics, and privacy concerns on their own, and it is important for organizations to address these issues.

For AI to perform its functions, it must analyze data from multiple sources, including patients' medical records. This places extensive patient information in an electronic environment that is vulnerable to hacking. Ransomware is currently prevalent in the healthcare industry because of the large amount of information available there. Credit card companies once offered an easier avenue of attack, but changes in regulations and practices have made it harder to access individuals' financial information. AI can increase not only the amount of data but also the amount of patient data available, making it a prime target for a ransomware attack.

Studies by many researchers, including the World Health Organization (WHO), have identified a number of ethical and legal concerns. Much of this research followed the Trump administration's executive order covering five areas related to AI.[13] The intent of the executive order was to increase the development and use of AI in healthcare. WHO has identified several concerns related to the use of AI and has listed ethical principles that should be applied when developing and using it. The concerns center mainly on data biases that could lead healthcare professionals to rely on inaccurate information; information provided to the end user that may appear accurate and valid but is not; and the lack of consent to use patient data to train AI systems.[14]

Ethical principles include:

  1. Protect human autonomy

  2. Promote human well-being, human safety, and the public interest

  3. Ensure transparency, explainability, and intelligibility

  4. Foster responsibility and accountability

  5. Ensure inclusiveness and equity

  6. Promote AI that is responsive and sustainable [15]

In other words, anyone using AI needs to address the concerns identified, find ways to reduce or eliminate bad or inappropriately used data, and begin collecting a wider range of data so that the entire population can benefit from AI.

Compliance and AI Risks

How can compliance professionals help guide the ethical use of AI and identify ways to protect patient privacy? Above all, compliance must develop policies adapted to the rapidly changing technological environment in behavioral health. Compliance professionals can also educate their organizations about the benefits and pitfalls of AI and how to avoid those pitfalls. The use or potential use of AI should be included in the annual compliance risk assessment and/or enterprise risk assessment. AI must be monitored and audited to ensure the ethical use and protection of patient information; one possible monitoring control is sketched below. From a privacy perspective, new de-identification methods may need to be developed to reduce the possibility of re-identifying a patient from multiple information sources. It is important to monitor OCR rulemaking to see how patient privacy protections will be strengthened and to implement any new regulations as quickly as possible. Working with OCR and SAMHSA toward practical and useful procedures and regulations should also be a priority. As compliance professionals, we are responsible for ensuring the ethical conduct of our healthcare organizations and for keeping patient information as secure as possible. These steps will help us bridge the gap with a rapidly changing technology environment.
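As one illustration of what such monitoring could look like, the sketch below scans a hypothetical access log for AI-service accesses that lack a documented patient disclosure. The field names and log structure are assumptions for illustration, not any particular EHR's schema:

```python
from dataclasses import dataclass

@dataclass
class AccessEvent:
    patient_id: str
    accessor: str             # user or service account that touched the record
    is_ai_service: bool       # tagged when the service account is provisioned
    disclosure_on_file: bool  # patient was told AI may be involved in care

def flag_undisclosed_ai_access(events: list[AccessEvent]) -> list[AccessEvent]:
    """Return AI-service accesses with no patient disclosure on file."""
    return [e for e in events if e.is_ai_service and not e.disclosure_on_file]

events = [
    AccessEvent("P001", "dr.smith", False, True),
    AccessEvent("P002", "chatbot-svc", True, False),  # should be flagged
    AccessEvent("P003", "chatbot-svc", True, True),
]

for finding in flag_undisclosed_ai_access(events):
    print(f"Review: AI access to patient {finding.patient_id} without disclosure")
```

A real control would pull these events from the organization's actual audit logs and route flagged items into the existing compliance work queue; the point is only that "monitor and audit AI" can be reduced to concrete, automatable checks.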

Takeaways

  • With the increasing use of technology, especially artificial intelligence (AI), the privacy of patient information is under increased scrutiny.

  • Current HIPAA regulations have not kept pace with evolving technology.

  • AI appears to be useful in treating some mental health problems.

  • Compliance professionals must collaborate with information security professionals as organizations look to implement AI, to ensure that patient information is protected both inside and outside the organization.

  • Like many technological advances, AI holds promise, especially in behavioral health, but it also poses challenges for compliance professionals.


1 IBM, "What Is Artificial Intelligence (AI)?" accessed October 25, 2023, https://www.ibm.com/topics/artificial-intelligence#:~:text=Artificial%20intelligence%20leverages%20computers%20and,capabilities%20of%20the%20human%20mind.

2 Jessica Kent, "What Role Could Artificial Intelligence Play in Mental Healthcare?" HealthITAnalytics, April 23, 2021, https://healthitanalytics.com/features/what-role-could-artificial-intelligence-play-in-mental-healthcare.

3 Kent, "What Role Could Artificial Intelligence Play in Mental Healthcare?"

4 Bernard Marr, "AI in Mental Health: Opportunities and Challenges in Developing Intelligent Digital Therapies," Forbes, July 6, 2023, https://www.forbes.com/sites/bernardmarr/2023/07/06/ai-in-mental-health-opportunities-and-challenges-in-developing-intelligent-digital-therapies/?sh=7b1fa3055e10.

5 Marr, "AI in Mental Health: Opportunities and Challenges in Developing Intelligent Digital Therapies."

6 Marr, "AI in Mental Health: Opportunities and Challenges in Developing Intelligent Digital Therapies."

7 Dinerstein v. Google, LLC, No. 20-3134 (7th Cir. July 11, 2023).

8 Sara Gerke, Timo Minssen, and Glenn Cohen, "Chapter 12: Ethical and Legal Challenges of Artificial Intelligence-Driven Healthcare," in Artificial Intelligence in Healthcare (London: Academic Press, 2020), 295-336, https://doi.org/10.1016/B978-0-12-818438-7.00012-5.

9 Sabrina Moreno, "The Rise of AI in Mental Health Care Raises Fears," Axios, March 9, 2023, https://www.axios.com/2023/03/09/ai-mental-health-fears.

10 Moreno, "The Rise of AI in Mental Health Care Raises Fears."

11 Gerke, Minssen, and Cohen, "Ethical and Legal Challenges of Artificial Intelligence-Driven Healthcare."

12 Moreno, "The Rise of AI in Mental Health Care Raises Fears."

13 Gerke, Minssen, and Cohen, "Ethical and Legal Challenges of Artificial Intelligence-Driven Healthcare."

14 "Artificial Intelligence in Mental Health Research: New WHO Study on Applications and Challenges," World Health Organization, February 6, 2023, https://www.who.int/europe/news/item/06-02-2023-artificial-intelligence-in-mental-health-research--new-who-study-on-applications-and-challenges.

15 "Artificial Intelligence in Mental Health Research: New WHO Study on Applications and Challenges."

*Barbara Vimont is director of corporate compliance at Parkview Health in Fort Wayne, Indiana.

