The Virtual Mobility (VM) grant aimed to foster a broad understanding of the ethical, legal, and societal implications of applying Artificial Intelligence in the healthcare domain. The collaboration made it possible to delve into key topics such as data privacy, anonymization, and bias within the context of real-world medical scenarios, and it provided the knowledge and skills needed to develop and deploy AI solutions responsibly, ensuring they are safe, fair, and transparent. The initiative fostered collaboration on responsible AI, leveraging the strengths of both the linguistic and computational domains. By merging the two, the programme sought to convey that AI solutions should be not only technically sound but also linguistically precise and culturally attuned.

The VM was instrumental in gaining insights into ethical AI concepts, bias detection in medical reports, and linguistic features for bias mitigation. Its achievements include a comprehensive understanding of ethical and transparent AI concepts, which are crucial for training Large Language Models (LLMs), especially in the medical field. The VM enabled discussions on detecting biases in medical reports, thereby contributing to the development of more equitable AI in the healthcare sector. It also offered an analysis of the linguistic features that can be harnessed for bias mitigation, and it identified linguistic patterns that can enhance Natural Language Processing (NLP) models by reducing biases in medical texts.