Project Details

Project Name: Transformer-based Medical Text Classification

Objective: Develop an encoder-only transformer model for token-level classification of medical text, labeling each word with a category such as symptom, disease, or treatment.
Design:
Architecture: A transformer encoder, suited to capturing bidirectional context and relationships between tokens in the text.
Focus: Tailored to Hindi-language medical text, addressing the need to process regional-language data in medical applications.
Features: Leverages pre-trained language models (e.g., BERT, RoBERTa) fine-tuned for the token-classification task.
Tools: Python, TensorFlow/Keras, Hugging Face Transformers.
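
To make the architecture concrete, below is a minimal Keras sketch of an encoder-only token classifier of the kind described above. It is an illustrative from-scratch model, not the project's actual implementation: the label set, vocabulary size, sequence length, and all hyperparameters are assumptions, and in practice the embeddings and encoder would come from a fine-tuned pre-trained model (e.g., via Hugging Face Transformers) rather than random initialization.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# All of the following are illustrative assumptions, not the project's settings.
LABELS = ["O", "SYMPTOM", "DISEASE", "TREATMENT"]  # hypothetical tag set
VOCAB_SIZE = 8000   # assumed subword vocabulary size
MAX_LEN = 64        # assumed maximum sequence length
D_MODEL = 128       # assumed hidden size


class EncoderBlock(layers.Layer):
    """One pre-norm transformer encoder layer: self-attention + feed-forward."""

    def __init__(self, heads=4, ff_dim=256, rate=0.1):
        super().__init__()
        self.attn = layers.MultiHeadAttention(num_heads=heads,
                                              key_dim=D_MODEL // heads)
        self.ffn = tf.keras.Sequential([
            layers.Dense(ff_dim, activation="gelu"),
            layers.Dense(D_MODEL),
        ])
        self.norm1 = layers.LayerNormalization()
        self.norm2 = layers.LayerNormalization()
        self.drop = layers.Dropout(rate)

    def call(self, x, training=False):
        h = self.norm1(x)
        x = x + self.drop(self.attn(h, h), training=training)   # residual 1
        x = x + self.drop(self.ffn(self.norm2(x)), training=training)  # residual 2
        return x


class TokenClassifier(tf.keras.Model):
    """Encoder-only model emitting one label distribution per input token."""

    def __init__(self, num_layers=2):
        super().__init__()
        self.tok_emb = layers.Embedding(VOCAB_SIZE, D_MODEL)
        self.pos_emb = layers.Embedding(MAX_LEN, D_MODEL)
        self.blocks = [EncoderBlock() for _ in range(num_layers)]
        self.head = layers.Dense(len(LABELS))  # per-token logits

    def call(self, ids, training=False):
        positions = tf.range(tf.shape(ids)[1])
        x = self.tok_emb(ids) + self.pos_emb(positions)
        for block in self.blocks:
            x = block(x, training=training)
        return self.head(x)


model = TokenClassifier()
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
batch = np.random.randint(0, VOCAB_SIZE, size=(2, MAX_LEN))
logits = model(batch)  # shape (2, MAX_LEN, len(LABELS)): a label per token
```

The per-token `Dense` head is what distinguishes this from sentence classification: every position in the sequence receives its own logits, which is the shape of output a fine-tuned BERT/RoBERTa token-classification head produces as well.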