pudn.com — Browse by category: All · Natural Language Processing (359)

[Natural Language Processing] LA_CRIMES

Los Angeles, California. The City of Angels. Tinseltown. The Entertainment Capital of the World! Known for its warm weather, palm trees, sprawling coastline, and Hollywood, along with producing some of the most iconic films and songs. However, as with any highly populated city, it isn't always glamorous and there can be a large volume of crime. (2024-04-13, Jupyter Notebook, 0KB, 0 downloads)

http://www.pudn.com/Download/item/id/1713004753463896.html

[Natural Language Processing] Text-Analysis-Of-Sports-News-Articles-Using-NLP

Examining gender bias in sports news articles via two datasets: 251 articles from top sports sites and 31,900 articles from Kaggle. Employing NLP and ML models, we categorized subjects (male/female) and analyzed the text for bias using techniques such as sentiment analysis, topic modeling, and keyword extraction. (2024-02-25, Jupyter Notebook, 0KB, 0 downloads)

http://www.pudn.com/Download/item/id/1708820442433949.html
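
The entry above names sentiment analysis among its techniques; below is a minimal sketch of how such scoring might look, using NLTK's VADER analyzer on invented example sentences (not the project's actual data or pipeline):

```python
# Hedged sketch: scoring sentence sentiment by inferred subject gender.
# Uses NLTK's VADER; the sample sentences are made up for illustration.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

articles = [  # (subject_gender, text) -- hypothetical examples
    ("male", "He dominated the match with a stunning performance."),
    ("female", "Her team struggled despite a brave effort."),
]

sia = SentimentIntensityAnalyzer()
for gender, text in articles:
    score = sia.polarity_scores(text)["compound"]  # -1 (negative) .. +1 (positive)
    print(f"{gender}: {score:+.3f}  {text}")
```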

[Natural Language Processing] TTNV3AzureIoTConnector

The Things Network to Azure IoT Hub connector. Built with OpenAPI (NSwag-generated client), MQTT (using MQTTnet), and the Azure IoT client SDK. (2022-07-12, C#, 187KB, 0 downloads)

http://www.pudn.com/Download/item/id/1686143334610565.html
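
For the MQTT side of a connector like this, the publish step is only a few lines; a hedged Python sketch using paho-mqtt (the project itself is C# with MQTTnet, and the broker host and topic below are hypothetical):

```python
# Illustrative only: publishing one telemetry message over MQTT with paho-mqtt.
# The connector above is C#/MQTTnet; the broker host and topic are hypothetical.
import json
import paho.mqtt.publish as publish

payload = json.dumps({"deviceId": "sensor-01", "temperature": 21.5})
publish.single(
    "devices/sensor-01/telemetry",   # hypothetical topic
    payload,
    qos=1,
    hostname="broker.example.com",   # hypothetical broker
)
```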

[Natural Language Processing] ESP32-LoRa-WiFi-Raw-Ethernet-Packets-Repeater

Project that implements sending and receiving of a custom Ethernet packet protocol, plus a way to capture these packets promiscuously on the ESP32 WiFi interface, send them via LoRa to another ESP32 module, and retransmit them over WiFi, creating a long-range WiFi repeater. (2018-07-10, C, 23KB, 0 downloads)

http://www.pudn.com/Download/item/id/1686140709939067.html

[Natural Language Processing] Models-and-Viterbi-in-Natural-Language-Processing

Hidden Markov models and Viterbi in natural language processing: POS tagging using Bayes nets and hidden Markov models, with maximum a posteriori (MAP) estimation computed via the Viterbi algorithm. (2017-12-31, Python, 2306KB, 0 downloads)

http://www.pudn.com/Download/item/id/1514678724380308.html
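
Since the entry centers on MAP decoding with Viterbi, here is a self-contained sketch of the algorithm over a toy two-state HMM (all probabilities invented for illustration):

```python
# Minimal Viterbi decoder for HMM POS tagging; toy model, not the repo's code.
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence (MAP path) for obs."""
    # V[t][s] = (best probability of any path ending in state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 0.0), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s].get(obs[t], 0.0), p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    state = max(V[-1], key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(obs) - 1, 0, -1):
        state = V[t][state][1]
        path.append(state)
    return list(reversed(path))

states = ("NOUN", "VERB")
start_p = {"NOUN": 0.6, "VERB": 0.4}
trans_p = {"NOUN": {"NOUN": 0.3, "VERB": 0.7}, "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emit_p = {"NOUN": {"dogs": 0.4, "bark": 0.1}, "VERB": {"dogs": 0.05, "bark": 0.5}}
print(viterbi(["dogs", "bark"], states, start_p, trans_p, emit_p))  # ['NOUN', 'VERB']
```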

[Natural Language Processing] nlp-stand-up-comedy

A natural language processing Python project on stand-up comedy, started from this tutorial: https://www.youtube.com/watch?v=xvqsFTUsOmc (2019-08-24, Jupyter Notebook, 2549KB, 0 downloads)

http://www.pudn.com/Download/item/id/1566585704516222.html

[Natural Language Processing] -

-, ... one module for data processing and one for users. 2. The crawler module scrapes football news and users' comments about football from Zhibo8, Sina Sports, and NetEase Sports, fetches pages on a Hadoop cluster, derives the URL set from analysis, and extracts the characteristic URLs. 3. Linux shell scripts filter the web pages... (2019-09-10, Others, 0KB, 0 downloads)

http://www.pudn.com/Download/item/id/1568051963354299.html

[Natural Language Processing] WYIL-communit

WYIL-communit, ... refactored the post-search feature, adding indexes and a global index via the IK Chinese word segmenter and implementing search-keyword highlighting; for the hot-post ranking module, used distributed Redis plus local Caffeine as a multi-level cache, raising QPS twentyfold (10 to 200) and greatly improving the... (2021-08-19, Others, 0KB, 0 downloads)

http://www.pudn.com/Download/item/id/1629320523464022.html
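
The multi-level cache this entry describes (a local in-process cache in front of shared Redis) is a common read-path pattern; a rough Python sketch of the idea, assuming redis-py and a Redis server on localhost (the project itself presumably uses Java with Caffeine):

```python
# Sketch of a two-tier read-through cache: in-process LRU in front of Redis.
# Assumes redis-py and a Redis server on localhost:6379; keys/TTLs are invented.
from functools import lru_cache
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_from_db(post_id: str) -> str:
    return f"post body for {post_id}"       # stand-in for the real database query

@lru_cache(maxsize=1024)                    # tier 1: local, per-process
def get_post(post_id: str) -> str:
    cached = r.get(f"post:{post_id}")       # tier 2: shared Redis
    if cached is not None:
        return cached
    value = load_from_db(post_id)
    r.setex(f"post:{post_id}", 300, value)  # 5-minute TTL
    return value
```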

[Natural Language Processing] Fund-review-Crawl-and-analysis

Fund-review-Crawl-and-analysis is a sentiment analysis project on fund reviews and the stock market. It crawls investor comments from the Tiantian Fund site and market news from the Eastmoney site, then analyzes three aspects (investor comments, heavily held stocks, and market trends) using a sentiment dictionary and an LDA model, in order to decide whether a fund is worth buying. Files labeled "clean" are the... (2023-04-30, Jupyter Notebook, 9527KB, 0 downloads)

http://www.pudn.com/Download/item/id/1682811268690111.html
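
For the LDA part of such a pipeline, a minimal gensim sketch on a toy corpus (the real project trains on crawled fund comments, not these made-up tokens):

```python
# Minimal LDA topic-model sketch with gensim; the corpus is invented.
from gensim import corpora, models

docs = [
    ["fund", "rises", "buy"],
    ["market", "falls", "sell"],
    ["fund", "market", "steady"],
]
dictionary = corpora.Dictionary(docs)             # token -> id mapping
bow = [dictionary.doc2bow(d) for d in docs]       # bag-of-words corpus
lda = models.LdaModel(bow, num_topics=2, id2word=dictionary,
                      passes=10, random_state=0)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```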

[Natural Language Processing] -GRU-am-softmax

-GRU-am-softmax: the code and accompanying paper cover similarity analysis of similar questions that share the same answer. In my business this corresponds to different search terms that lead to downloading the same app: in a certain e-commerce platform's search-ranking business, finding the matching app for a given search term. (2018-09-17, Python, 4KB, 0 downloads)

http://www.pudn.com/Download/item/id/1537124709126478.html
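
Additive-margin softmax, which the repo name references, replaces the target logit s·cosθ_y with s·(cosθ_y − m); a small NumPy sketch of the loss on toy values (not the repo's code):

```python
# Additive-margin softmax loss on toy data: subtract margin m from the target
# cosine similarity, scale by s, then apply ordinary cross-entropy.
import numpy as np

def am_softmax_loss(cos_sims, target, s=30.0, m=0.35):
    """cos_sims: (num_classes,) cosine similarities; target: true class index."""
    logits = s * cos_sims
    logits[target] = s * (cos_sims[target] - m)   # penalize the target logit
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[target])

cos_sims = np.array([0.8, 0.3, 0.1])              # hypothetical similarities
print(am_softmax_loss(cos_sims, target=0))
```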

[Natural Language Processing] DSProject

DSProject, a course project for Foundations of Data Science: uses Selenium to automatically crawl China Judgements Online (the Chinese judicial judgment document site), then segments the crawled judgments with third-party libraries such as LTP, letting legal practitioners efficiently annotate key case information and assisting... (2022-04-07, JavaScript, 659KB, 0 downloads)

http://www.pudn.com/Download/item/id/1649293021517275.html
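
The Selenium crawl step might be structured roughly like this sketch; the URL and CSS selectors are placeholders, not the judgment-document site's actual markup:

```python
# Illustrative Selenium fetch-and-extract loop; URL and selectors are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")            # run without a visible browser
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com/documents")   # placeholder URL
    for item in driver.find_elements(By.CSS_SELECTOR, ".doc-title"):
        print(item.text)                          # hand off to segmentation here
finally:
    driver.quit()
```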

[Natural Language Processing] tokenizer

tokenizer, a simple Chinese word segmentation algorithm, usable wherever Chinese segmentation is needed: profanity filtering in online games, document parsing for search engines, natural language processing, and so on. (2018-07-30, Python, 2KB, 0 downloads)

http://www.pudn.com/Download/item/id/1532887233965920.html
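
A classic baseline for a simple Chinese segmenter like this one is forward maximum matching against a fixed vocabulary; a self-contained sketch (the tiny dictionary is invented, and the repo may well use a different algorithm):

```python
# Forward maximum matching: at each position, take the longest dictionary word.
def fmm_tokenize(text, vocab, max_word_len=4):
    tokens, i = [], 0
    while i < len(text):
        for length in range(min(max_word_len, len(text) - i), 0, -1):
            word = text[i:i + length]
            if length == 1 or word in vocab:   # single characters always match
                tokens.append(word)
                i += length
                break
    return tokens

vocab = {"自然语言", "处理", "中文", "分词"}
print(fmm_tokenize("中文分词处理", vocab))  # ['中文', '分词', '处理']
```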

[Natural Language Processing] ZJU_JWW_scores_analyse

ZJU_JWW_scores_analyse lets you batch copy-and-paste grades from the academic administration site into a text file and upload it; the system then automatically categorizes the grades and gives an at-a-glance view of each component of your results. (2015-12-16, PHP, 3070KB, 0 downloads)

http://www.pudn.com/Download/item/id/1450211846586503.html

[Natural Language Processing] qidian_spider

qidian_spider, a crawler project covering data crawling, parsing, storage, analysis, and visualization. Written entirely in Python, it targets the Qidian Chinese web-novel site, collecting information on the 100 novels on its best-seller list (rank, title, author, genre, synopsis, latest chapter, last update time, and book link). (2023-02-25, HTML, 6301KB, 0 downloads)

http://www.pudn.com/Download/item/id/1677268358549266.html
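
A generic crawl-parse-store skeleton in the vein of this entry, using requests and BeautifulSoup; the URL and selectors are placeholders rather than Qidian's real page structure:

```python
# Generic crawl -> parse -> store skeleton; URL and selectors are placeholders.
import csv
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/bestsellers", timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

rows = []
for book in soup.select(".book-item"):               # placeholder selector
    title = book.select_one(".title")
    author = book.select_one(".author")
    if title and author:
        rows.append([title.get_text(strip=True), author.get_text(strip=True)])

with open("books.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows([["title", "author"], *rows])
```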

[Natural Language Processing] scrapy_poetry

scrapy_poetry, a project that uses Scrapy to crawl the Gushiwen ancient-poetry website and collect Song ci poems by category (love, Qixi, etc.): tune title (cipai), author, text, annotations, and composition background. (2022-06-05, Python, 2986KB, 0 downloads)

http://www.pudn.com/Download/item/id/1654401029827152.html
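
A skeleton Scrapy spider in the shape the entry describes; the start URL and CSS selectors are placeholders, not the poetry site's actual markup:

```python
# Skeleton Scrapy spider; start URL and selectors are placeholders.
import scrapy

class PoetrySpider(scrapy.Spider):
    name = "poetry"
    start_urls = ["https://example.com/songci/love"]  # placeholder category page

    def parse(self, response):
        for poem in response.css(".poem"):            # placeholder selector
            yield {
                "tune_title": poem.css(".cipai::text").get(),
                "author": poem.css(".author::text").get(),
                "text": " ".join(poem.css(".body::text").getall()),
            }
        next_page = response.css("a.next::attr(href)").get()
        if next_page:                                 # follow pagination
            yield response.follow(next_page, callback=self.parse)
```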

[Natural Language Processing] BERT_news_classfication

BERT_news_classfication uses Python crawlers to collect news from People's Daily Online, Sina, and other sites as a training set, builds a news text classification model on BERT, and adds a visualization front end built with Node.js + Vue. (2022-03-14, Python, 40208KB, 0 downloads)

http://www.pudn.com/Download/item/id/1647191078911423.html
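
A minimal inference sketch for BERT-based news classification, using Hugging Face transformers with the public bert-base-chinese weights; the label set and the untrained classification head are illustrative, not the project's model:

```python
# Minimal BERT classification sketch; labels and head are illustrative.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

labels = ["politics", "sports", "technology"]   # hypothetical classes
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=len(labels)
)
model.eval()

inputs = tokenizer("人民网发布最新科技新闻", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[logits.argmax(dim=-1).item()])     # random until fine-tuned
```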

[Natural Language Processing] Reading-and-comprehense-redis-cluster

Reading-and-comprehense-redis-cluster: Chinese source-reading annotations for the distributed NoSQL store Redis, with detailed comments and call-flow notes, suggested improvement points, and a complete analysis of Redis Cluster functionality, node scale-out, slot migration, failover, and consistency elections; very helpful for understanding the Redis source and for solving... (2021-08-22, C, 43656KB, 0 downloads)

http://www.pudn.com/Download/item/id/1629634697584671.html

[Natural Language Processing] StarTools

StarTools, open-source WeChat mini-program source code: a "star toolbox" that needs no server or cloud resources, bundles a large set of tools with more functions added continually, and supports WeChat's ad-revenue ("traffic master") program. Stars welcome. Feature list: internet speed test, calculator, blood type calculator, relationship calculator, lifetime clock, clothing size calculator, color blindness test, mortgage calculator, ruler, protractor, BMI calculator. (2022-10-13, JavaScript, 857KB, 0 downloads)

http://www.pudn.com/Download/item/id/1665616500498551.html
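
Two of the listed tools are simple closed-form calculations; sketches of the standard formulas (the mini-program's own implementation may differ):

```python
# Standard amortized monthly payment and BMI formulas, as simple utilities.
def monthly_payment(principal, annual_rate, years):
    """Equal-installment mortgage payment: P*r*(1+r)^n / ((1+r)^n - 1)."""
    r, n = annual_rate / 12, years * 12
    if r == 0:
        return principal / n
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

print(round(monthly_payment(1_000_000, 0.045, 30), 2))  # e.g. a 1M loan at 4.5%
print(round(bmi(70, 1.75), 1))                          # 22.9
```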

[Natural Language Processing] Relation-extraction-using-Semantic-Web

Relation extraction using the Semantic Web: we will process unstructured data from the web (obtained by crawling some sample websites), possibly by running a local Apache Solr installation and manually feeding it web pages. We can use the Stanford NLP API or the MetaMind API to extract semantics from the unstructured text; after extracting some semantics, we can construct a structured data format. (2015-12-07, Java, 4368KB, 0 downloads)

http://www.pudn.com/Download/item/id/1449418901934897.html
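
As a stand-in for the semantics-extraction step, a toy subject-verb-object triple extractor over a spaCy dependency parse (an illustrative substitute for the Stanford/MetaMind APIs the entry mentions, not the project's Java code; requires `python -m spacy download en_core_web_sm`):

```python
# Toy SVO triple extraction from a dependency parse, using spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple acquired the startup. The startup builds NLP tools.")

for sent in doc.sents:
    for token in sent:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "obj")]
            for s in subjects:
                for o in objects:
                    print((s.text, token.lemma_, o.text))  # e.g. ('Apple', 'acquire', 'startup')
```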
Total: 359