Comment thread on commit ed8faad ([Schlampig/MedNER] Update README.md)

Q (translated from Chinese): May I ask, is it possible to obtain the author's .ph file?
A (Schlampig): Sorry, the fine-tuned medicine-based model is private for some reason. However, since the training corpus is provided, you can train your own medicine-based model from any general open-source pre-trained language model, such as bert-base (see the answers in issue #1).
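As a rough illustration of that suggestion (not the repository's actual training code), the sketch below loads a general bert-base checkpoint for token classification with the Hugging Face transformers library. The model name "bert-base-chinese", the tag set, and the toy sentence are all placeholder assumptions; substitute the labels from your own corpus.

```python
# Minimal sketch: fine-tuning a general BERT checkpoint for NER.
# Assumes the Hugging Face transformers library; the BIO label set
# below is a placeholder, not the tag set of the MedNER corpus.
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

labels = ["O", "B-DISEASE", "I-DISEASE"]  # hypothetical tag set
tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = BertForTokenClassification.from_pretrained(
    "bert-base-chinese", num_labels=len(labels)
)

# One toy training step; a real run would loop over the whole corpus.
enc = tokenizer("患者出现发热症状", return_tensors="pt")
tags = torch.zeros(enc["input_ids"].shape, dtype=torch.long)  # all "O" here
tags[0, 0] = -100   # ignore [CLS] in the loss
tags[0, -1] = -100  # ignore [SEP] in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**enc, labels=tags).loss
loss.backward()
optimizer.step()
```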
User: Thank you for your reply. I only used the Yidu-S4K dataset and bert_pytorch_Chinese.bin, and I changed the dict in prepro.py and train.py, but I don't understand why the loss keeps decreasing while the other metrics are always 0.0.
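One common way this symptom arises (purely illustrative, not a diagnosis of this particular run): with heavily imbalanced BIO tags, a model can keep lowering its loss by predicting "O" for every token, and entity-level precision, recall, and F1 are then exactly 0.0 even though the loss curve looks healthy. A quick sanity check is to count the predicted tags:

```python
# Sanity check: if every predicted tag is "O", there are no predicted
# entities, so precision/recall/F1 are 0.0 while the loss still falls.
from collections import Counter

pred_tags = ["O", "O", "O", "O", "O"]                  # hypothetical predictions
gold_tags = ["B-DISEASE", "I-DISEASE", "O", "O", "O"]  # hypothetical gold tags

print(Counter(pred_tags))  # all "O": no predicted entities at all

tp = sum(p == g and g != "O" for p, g in zip(pred_tags, gold_tags))
pred_pos = sum(p != "O" for p in pred_tags)  # 0 here
gold_pos = sum(g != "O" for g in gold_tags)  # 2 here
precision = tp / pred_pos if pred_pos else 0.0
recall = tp / gold_pos if gold_pos else 0.0
print(precision, recall)  # 0.0 0.0
```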
A (Schlampig): Maybe you could tune some hyperparameters, such as the learning rate, batch size, and number of epochs, and try again. Also make sure that the tokens and their tags are still aligned after tokenization, and that the model and its vocab file are loaded correctly.
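To illustrate the alignment point (a sketch assuming a WordPiece-style tokenizer, not the repository's own preprocessing): subword tokenization can split one word into several pieces, so the tag sequence must be expanded to match, otherwise every label is shifted and the metrics collapse. The helper name and the toy vocabulary below are made up for the example.

```python
# Sketch: keep BIO tags aligned with subword tokens. align_tags and the
# toy tokenizer are hypothetical; use your real tokenizer in practice.
def align_tags(words, tags, tokenize):
    """Expand word-level BIO tags so each subword piece gets a tag."""
    out_tokens, out_tags = [], []
    for word, tag in zip(words, tags):
        pieces = tokenize(word)              # e.g. ["ibu", "##pro", "##fen"]
        out_tokens.extend(pieces)
        # First piece keeps the tag; continuation pieces turn B-X into I-X.
        cont = tag.replace("B-", "I-") if tag != "O" else "O"
        out_tags.extend([tag] + [cont] * (len(pieces) - 1))
    assert len(out_tokens) == len(out_tags)  # the invariant worth checking
    return out_tokens, out_tags

toy_vocab = {"ibuprofen": ["ibu", "##pro", "##fen"]}
print(align_tags(["ibuprofen"], ["B-DRUG"], lambda w: toy_vocab.get(w, [w])))
# (['ibu', '##pro', '##fen'], ['B-DRUG', 'I-DRUG', 'I-DRUG'])
```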