Malaysian-Dataset: we gather Malaysian datasets!
Most of the folders lack a README, so it is better to browse https://huggingface.co/mesolitica
Proper documentation is available at https://malaysian-dataset.readthedocs.io
Contributors heavily crawled Malaysian websites; you can check out the full list of crawled websites at https://github.com/users/huseinzol05/projects/1
- We capture most live data from Twitter, Facebook and Instagram using crawlers, then retrieve it using Elasticsearch queries.
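For illustration, a query against a crawled-post index might look like the sketch below; the field names (`text`, `platform`) and the query shape are assumptions for this example, not the project's actual index schema.

```python
# Hypothetical sketch of an Elasticsearch query body for crawled posts.
# The field names ("text", "platform") are assumptions, not the
# project's actual index schema.

def build_query(keyword, platform=None, size=100):
    """Build an Elasticsearch bool query, optionally filtered by platform."""
    must = [{"match": {"text": keyword}}]
    if platform:
        must.append({"term": {"platform": platform}})
    return {"size": size, "query": {"bool": {"must": must}}}

# e.g. search crawled tweets mentioning "banjir" (flood)
query = build_query("banjir", platform="twitter")
```

The resulting dict can be passed as the request body to any Elasticsearch client's search call.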
- We use Google Translate.
- We use LLMs, including ChatGPT 3.5, ChatGPT 4, Mixtral, and Llama 3 70B.
- We use the Malaya translation model, https://huggingface.co/mesolitica/translation-t5-small-standard-bahasa-cased-v2
- Supervise a small set of samples, then train a base model on it.
- Use the trained base model to predict labels for a larger pool, then retrain the next student model on the high-confidence labelled data.
- Repeat.
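The self-training loop above can be sketched as follows. The toy nearest-centroid classifier, the confidence margin, the threshold and the data are illustrative assumptions; the actual models and confidence scores in this repo are task-specific.

```python
# Minimal self-training sketch: seed labels -> predict pool ->
# keep high-confidence pseudo-labels -> retrain -> repeat.
# The nearest-centroid "model" and the margin threshold are assumptions.
import numpy as np

def fit_centroids(X, y):
    """Fit one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_confidence(centroids, X):
    """Predict nearest class; confidence is the margin to the runner-up."""
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes], axis=1)
    order = np.argsort(d, axis=1)
    pred = np.array(classes)[order[:, 0]]
    margin = d[np.arange(len(X)), order[:, 1]] - d[np.arange(len(X)), order[:, 0]]
    return pred, margin

# Step 1: small supervised seed set
X_seed = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.2, 4.9]])
y_seed = np.array([0, 0, 1, 1])
# Step 2: larger unlabelled pool
X_pool = np.array([[0.1, 0.2], [4.9, 5.1], [2.5, 2.5]])

model = fit_centroids(X_seed, y_seed)
for _ in range(3):  # Step 3: repeat
    pred, conf = predict_with_confidence(model, X_pool)
    keep = conf > 1.0  # only trust high-confidence pseudo-labels
    X_seed = np.vstack([X_seed, X_pool[keep]])
    y_seed = np.concatenate([y_seed, pred[keep]])
    X_pool = X_pool[~keep]
    model = fit_centroids(X_seed, y_seed)
```

The ambiguous point near the decision boundary never clears the margin threshold, so it stays unlabelled rather than polluting the training set.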
- Generate using ChatGPT 3.5, ChatGPT 4, Mixtral, and Llama 3 70B.
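As an illustration of this generation step, a prompt template like the one below could be sent to any of those models; the wording and the requested JSON schema are hypothetical, not the prompts actually used in this repo.

```python
# Hypothetical prompt template for synthetic data generation.
# The instruction wording and output schema are illustrative only.

def build_generation_prompt(topic, n_pairs=5, language="Malay"):
    """Build a prompt asking an LLM for synthetic QA pairs."""
    return (
        f"Generate {n_pairs} question-answer pairs in {language} about {topic}. "
        'Return a JSON list of objects with keys "question" and "answer".'
    )

prompt = build_generation_prompt("Malaysian food", n_pairs=3)
```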
- Any missing `mp.py`, get it at https://gist.github.com/huseinzol05/98974ae8c6c7a65d4bc0af9f5003786a
- For any other missing Python scripts, please contact me ASAP or create an issue.
- Please at least email us before redistributing these data. Remember, all this hard work is something we want to give away for free.
- You only see the data, but nobody sees how much it cost us to make it public.
- Feel free to contact me to request a new dataset.
- Feel free to open an issue if a dataset link is forbidden; sometimes I forget to make it open to the public.
A lot of the data here is semi-supervised / translated / tagged / decoded using third-party software, for example Google Translate and Google Speech, so to avoid any future complications, it is better not to use this data for commercial purposes, although certain research purposes are allowed.
Thanks to Im Big, LigBlou, Mesolitica and KeyReply for sponsoring AWS, Google and private cloud to deploy distributed crawlers.