
[Udemy] Natural Language Processing With Transformers in Python (06.2021)

Torrent Summary

Torrent name: [Udemy] Natural Language Processing With Transformers in Python (06.2021)
File type: Video
Number of files: 98
Total size: 3.29 GB
Indexed: 2022-08-16 22:44
Downloads: 3
Popularity: 178
Last downloaded: 2024-10-07 00:08

Download the Torrent File

Download the torrent file (.torrent)

Magnet Link Download

magnet:?xt=urn:btih:15a0a25219359a8870cb3f3844c0e975a3826772&dn=[Udemy] Natural Language Processing With Transformers in Python (06.2021)

Copy the link into Thunder (Xunlei) or QQ Xuanfeng to download, or use Baidu Cloud offline download.
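If you prefer to handle the magnet link programmatically rather than pasting it into a client, here is a minimal Python sketch (not part of the original page) that extracts the two components of the URI above, the BitTorrent "btih" info-hash and the display name, using only the standard library:

    # Minimal sketch: parse the magnet URI's query string with the stdlib.
    from urllib.parse import urlparse, parse_qs

    magnet = ("magnet:?xt=urn:btih:15a0a25219359a8870cb3f3844c0e975a3826772"
              "&dn=[Udemy] Natural Language Processing With Transformers in Python (06.2021)")

    # urlparse puts everything after "magnet:?" into .query; parse_qs splits the fields.
    params = parse_qs(urlparse(magnet).query)

    # "xt" carries the 40-char hex SHA-1 of the torrent's info dictionary.
    info_hash = params["xt"][0].removeprefix("urn:btih:")
    # "dn" is the human-readable display name shown by the client.
    display_name = params["dn"][0]

    print(info_hash)     # 15a0a25219359a8870cb3f3844c0e975a3826772
    print(display_name)  # [Udemy] Natural Language Processing With Transformers in Python (06.2021)

Any BitTorrent client that accepts magnet links only needs the info-hash to locate peers; the display name is cosmetic.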

Files in This Torrent

[Udemy] Natural Language Processing With Transformers in Python (06.2021).torrent
  • 1. Introduction/1. Introduction.mp4 9.2MB
  • 1. Introduction/2. Course Overview.mp4 34.38MB
  • 1. Introduction/3. Environment Setup.mp4 37.25MB
  • 1. Introduction/4. CUDA Setup.mp4 23.73MB
  • 10. Metrics For Language/1. Q&A Performance With Exact Match (EM).mp4 18.17MB
  • 10. Metrics For Language/2. ROUGE in Python.mp4 21.66MB
  • 10. Metrics For Language/3. Applying ROUGE to Q&A.mp4 33.95MB
  • 10. Metrics For Language/4. Recall, Precision and F1.mp4 21.02MB
  • 10. Metrics For Language/5. Longest Common Subsequence (LCS).mp4 14.95MB
  • 10. Metrics For Language/6. Q&A Performance With ROUGE.mp4 18.75MB
  • 11. Reader-Retriever QA With Haystack/1. Intro to Retriever-Reader and Haystack.mp4 13.94MB
  • 11. Reader-Retriever QA With Haystack/10. FAISS in Haystack.mp4 68.09MB
  • 11. Reader-Retriever QA With Haystack/11. What is DPR.mp4 29.65MB
  • 11. Reader-Retriever QA With Haystack/12. The DPR Architecture.mp4 14.28MB
  • 11. Reader-Retriever QA With Haystack/13. Retriever-Reader Stack.mp4 75.25MB
  • 11. Reader-Retriever QA With Haystack/2. What is Elasticsearch.mp4 23.54MB
  • 11. Reader-Retriever QA With Haystack/3. Elasticsearch Setup (Windows).mp4 20.9MB
  • 11. Reader-Retriever QA With Haystack/4. Elasticsearch Setup (Linux).mp4 20.21MB
  • 11. Reader-Retriever QA With Haystack/5. Elasticsearch in Haystack.mp4 39.02MB
  • 11. Reader-Retriever QA With Haystack/6. Sparse Retrievers.mp4 20.37MB
  • 11. Reader-Retriever QA With Haystack/7. Cleaning the Index.mp4 26.45MB
  • 11. Reader-Retriever QA With Haystack/8. Implementing a BM25 Retriever.mp4 12.55MB
  • 11. Reader-Retriever QA With Haystack/9. What is FAISS.mp4 42.9MB
  • 12. [Project] Open-Domain QA/1. ODQA Stack Structure.mp4 6.23MB
  • 12. [Project] Open-Domain QA/2. Creating the Database.mp4 42.43MB
  • 12. [Project] Open-Domain QA/3. Building the Haystack Pipeline.mp4 55.8MB
  • 13. Similarity/1. Introduction to Similarity.mp4 28.25MB
  • 13. Similarity/2. Extracting The Last Hidden State Tensor.mp4 29.76MB
  • 13. Similarity/3. Sentence Vectors With Mean Pooling.mp4 32.09MB
  • 13. Similarity/4. Using Cosine Similarity.mp4 33.86MB
  • 13. Similarity/5. Similarity With Sentence-Transformers.mp4 23.02MB
  • 14. Fine-Tuning Transformer Models/1. Visual Guide to BERT Pretraining.mp4 28.6MB
  • 14. Fine-Tuning Transformer Models/10. Fine-tuning with NSP - Data Preparation.mp4 77.97MB
  • 14. Fine-Tuning Transformer Models/11. Fine-tuning with NSP - DataLoader.mp4 14.27MB
  • 14. Fine-Tuning Transformer Models/13. The Logic of MLM and NSP.mp4 26.25MB
  • 14. Fine-Tuning Transformer Models/14. Fine-tuning with MLM and NSP - Data Preparation.mp4 43.62MB
  • 14. Fine-Tuning Transformer Models/2. Introduction to BERT For Pretraining Code.mp4 29.26MB
  • 14. Fine-Tuning Transformer Models/3. BERT Pretraining - Masked-Language Modeling (MLM).mp4 46.71MB
  • 14. Fine-Tuning Transformer Models/4. BERT Pretraining - Next Sentence Prediction (NSP).mp4 42.08MB
  • 14. Fine-Tuning Transformer Models/5. The Logic of MLM.mp4 79.41MB
  • 14. Fine-Tuning Transformer Models/6. Fine-tuning with MLM - Data Preparation.mp4 76.72MB
  • 14. Fine-Tuning Transformer Models/7. Fine-tuning with MLM - Training.mp4 69.69MB
  • 14. Fine-Tuning Transformer Models/8. Fine-tuning with MLM - Training with Trainer.mp4 19.88MB
  • 14. Fine-Tuning Transformer Models/9. The Logic of NSP.mp4 20.88MB
  • 2. NLP and Transformers/1. The Three Eras of AI.mp4 22.2MB
  • 2. NLP and Transformers/10. Transformer Heads.mp4 39.82MB
  • 2. NLP and Transformers/2. Pros and Cons of Neural AI.mp4 32.79MB
  • 2. NLP and Transformers/3. Word Vectors.mp4 21.73MB
  • 2. NLP and Transformers/4. Recurrent Neural Networks.mp4 17.1MB
  • 2. NLP and Transformers/5. Long Short-Term Memory.mp4 6.34MB
  • 2. NLP and Transformers/6. Encoder-Decoder Attention.mp4 25.17MB
  • 2. NLP and Transformers/7. Self-Attention.mp4 20.8MB
  • 2. NLP and Transformers/8. Multi-head Attention.mp4 13.33MB
  • 2. NLP and Transformers/9. Positional Encoding.mp4 55.53MB
  • 3. Preprocessing for NLP/1. Stopwords.mp4 23.06MB
  • 3. Preprocessing for NLP/2. Tokens Introduction.mp4 24.04MB
  • 3. Preprocessing for NLP/3. Model-Specific Special Tokens.mp4 18.89MB
  • 3. Preprocessing for NLP/4. Stemming.mp4 17.24MB
  • 3. Preprocessing for NLP/5. Lemmatization.mp4 10.58MB
  • 3. Preprocessing for NLP/6. Unicode Normalization - Canonical and Compatibility Equivalence.mp4 16.97MB
  • 3. Preprocessing for NLP/7. Unicode Normalization - Composition and Decomposition.mp4 20.25MB
  • 3. Preprocessing for NLP/8. Unicode Normalization - NFD and NFC.mp4 20.02MB
  • 3. Preprocessing for NLP/9. Unicode Normalization - NFKD and NFKC.mp4 30.42MB
  • 4. Attention/1. Attention Introduction.mp4 15.79MB
  • 4. Attention/2. Alignment With Dot-Product.mp4 49.12MB
  • 4. Attention/3. Dot-Product Attention.mp4 28.99MB
  • 4. Attention/4. Self Attention.mp4 28.4MB
  • 4. Attention/5. Bidirectional Attention.mp4 10.78MB
  • 4. Attention/6. Multi-head and Scaled Dot-Product Attention.mp4 33.83MB
  • 5. Language Classification/1. Introduction to Sentiment Analysis.mp4 37.53MB
  • 5. Language Classification/2. Prebuilt Flair Models.mp4 30.71MB
  • 5. Language Classification/3. Introduction to Sentiment Models With Transformers.mp4 26.92MB
  • 5. Language Classification/4. Tokenization And Special Tokens For BERT.mp4 55.43MB
  • 5. Language Classification/5. Making Predictions.mp4 25.97MB
  • 6. [Project] Sentiment Model With TensorFlow and Transformers/1. Project Overview.mp4 12.51MB
  • 6. [Project] Sentiment Model With TensorFlow and Transformers/2. Getting the Data (Kaggle API).mp4 35.02MB
  • 6. [Project] Sentiment Model With TensorFlow and Transformers/3. Preprocessing.mp4 62.49MB
  • 6. [Project] Sentiment Model With TensorFlow and Transformers/4. Building a Dataset.mp4 22.57MB
  • 6. [Project] Sentiment Model With TensorFlow and Transformers/5. Dataset Shuffle, Batch, Split, and Save.mp4 30.17MB
  • 6. [Project] Sentiment Model With TensorFlow and Transformers/6. Build and Save.mp4 77.01MB
  • 6. [Project] Sentiment Model With TensorFlow and Transformers/7. Loading and Prediction.mp4 56.77MB
  • 7. Long Text Classification With BERT/1. Classification of Long Text Using Windows.mp4 116.14MB
  • 7. Long Text Classification With BERT/2. Window Method in PyTorch.mp4 84.94MB
  • 8. Named Entity Recognition (NER)/1. Introduction to spaCy.mp4 51.64MB
  • 8. Named Entity Recognition (NER)/10. NER With roBERTa.mp4 59.01MB
  • 8. Named Entity Recognition (NER)/2. Extracting Entities.mp4 33.53MB
  • 8. Named Entity Recognition (NER)/4. Authenticating With The Reddit API.mp4 35.63MB
  • 8. Named Entity Recognition (NER)/5. Pulling Data With The Reddit API.mp4 88.96MB
  • 8. Named Entity Recognition (NER)/6. Extracting ORGs From Reddit Data.mp4 28.11MB
  • 8. Named Entity Recognition (NER)/7. Getting Entity Frequency.mp4 18.39MB
  • 8. Named Entity Recognition (NER)/8. Entity Blacklist.mp4 20.15MB
  • 8. Named Entity Recognition (NER)/9. NER With Sentiment.mp4 99.88MB
  • 9. Question and Answering/1. Open Domain and Reading Comprehension.mp4 16.07MB
  • 9. Question and Answering/2. Retrievers, Readers, and Generators.mp4 28.68MB
  • 9. Question and Answering/3. Intro to SQuAD 2.0.mp4 25.39MB
  • 9. Question and Answering/4. Processing SQuAD Training Data.mp4 38.42MB
  • 9. Question and Answering/5. (Optional) Processing SQuAD Training Data with Match-Case.mp4 30.1MB
  • 9. Question and Answering/7. Our First Q&A Model.mp4 45.71MB