Build a multi-source (WeChat Official Accounts, RSS), clean, and personalized reading environment

Overview

2C

Build a multi-source (WeChat Official Accounts, RSS), clean, and personalized reading environment

As a heavy user of WeChat Official Accounts, I have long treated them as my main place for absorbing knowledge. As usage grows, most readers sooner or later run into the same headache: ads.

Suppose you follow a dozen or so accounts. If each one takes an ad every two weeks, in theory you will face twenty-plus ads a month, and in practice more; on an unlucky day your whole feed may be nothing but ads. If you follow twenty or thirty accounts, the current ad bombardment is almost impossible to avoid.

Worse still, most of these ads peddle anxiety and spread a gloomy mood, which I find unbearable and which seriously affects my state of mind. Yet some of these accounts do publish genuinely good articles, so how can you read the articles without the ads? If you share this frustration with the reading experience on WeChat Official Accounts, this project is for you.

That is why this project exists: to build a multi-source (WeChat Official Accounts, RSS), clean, and personalized reading environment.

PS: To be clear, viewing ads does support authors and, to some extent, encourages them to keep producing good work. But when I like an article I tip the author directly, so the free-rider argument does not hold in my case, thank you.

Implementation

My approach is simple; the rough workflow is as follows:

2c_process

A brief explanation:

  • Collector: monitors the WeChat Official Accounts or blog sources you follow and builds a feed stream as the input source;
  • Classifier (ads): uses machine learning on historical ad data to build an ad classifier (custom rules supported), automatically tags each article, and persists it to MongoDB (a minimal sketch of this step follows below);
  • Sender: handles data requests and responses through the API layer, lets users set personalized preferences, and then distributes clean articles to WeChat, DingTalk, Telegram (TG), or even your own website.

This gives you a clean reading environment. Taking it further, the same pipeline can power a personal knowledge base, with features such as tag management and knowledge-graph construction, all of which can be implemented at the API layer.
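
For illustration, the ad-tagging step can be as simple as comparing each incoming title against known ad titles. The sketch below is a minimal, hypothetical version using scikit-learn TF-IDF character n-grams with a cosine-similarity threshold (the 0.6 mirrors the cos_value setting that appears in the configuration further down this page); the project's actual ad_marker implementation may differ. An article flagged this way would simply be stored with an ad tag in MongoDB and filtered out at distribution time.

import csv

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def load_ad_titles(path: str = ".files/datasets/ads.csv") -> list:
    # Historical ad samples, one "title,url" row per ad (a header row is assumed).
    with open(path, newline="", encoding="utf-8") as f:
        return [row["title"] for row in csv.DictReader(f)]

def is_ad(title: str, ad_titles: list, threshold: float = 0.6) -> bool:
    # Tag the article as an ad when its title is close enough to any known ad title.
    vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(1, 2))
    matrix = vectorizer.fit_transform(ad_titles + [title])
    scores = cosine_similarity(matrix[len(ad_titles)], matrix[: len(ad_titles)])
    return float(scores.max()) >= threshold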

For implementation details, see the article [打造一个干净且个性化的公众号阅读环境] (Building a clean and personalized reading environment for WeChat Official Accounts).

Usage

This project is managed with pipenv. Installation and startup:

# Make sure you have a Python 3.6+ environment
git clone https://github.com/howie6879/2c.git
cd 2c

# Create the base environment
pipenv install --python={your_python3.6+_path} --skip-lock --dev
# Configure .env; see doc/00.环境变量.md for details
# Start
pipenv run dev

It is recommended to read the documentation before use:

Help

To improve the model's detection accuracy, I would like everyone to contribute some ad samples. See the sample file .files/datasets/ads.csv; the format is as follows:

title url
Ad article title Ad article link

An example:

ads_demo

Ads are usually placed across multiple accounts, so before adding a record please check whether it already exists (a quick duplicate check is sketched below). I really do hope everyone can pitch in; please send a PR and contribute your part!
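
If it helps, here is a minimal way to check whether a URL is already in the sample file before opening a PR. It assumes the CSV has a header row with title and url columns, and the URL below is only a placeholder:

import csv

def already_recorded(url: str, path: str = ".files/datasets/ads.csv") -> bool:
    # True if this ad URL already exists in the sample file.
    with open(path, newline="", encoding="utf-8") as f:
        return any(row.get("url") == url for row in csv.DictReader(f))

print(already_recorded("https://mp.weixin.qq.com/s/xxxxxxx"))  # placeholder URL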

Acknowledgements

Many thanks to the following projects:

Thanks to the following contributors (in no particular order):

About

Feel free to reach out to me (follow to join the group chat):

img
Comments
  • One-click Docker install fails at runtime with ERROR Liuli 执行失败!'doc_source'

    The run log is below; what could be causing this?

    [2022:02:18 10:51:54] INFO  Liuli Schedule(v0.2.1) task([email protected]_team) started successfully :)
    [2022:02:18 10:51:54] INFO  Liuli Task([email protected]_team) schedule time:
     00:10
     12:10
     21:10
    [2022:02:18 10:51:54] ERROR Liuli 执行失败!'doc_source'
    opened by GuoZhaoHui628 24
  • Collecting from Official Accounts whose names contain a space always fails

    [2022:05:27 08:11:47] INFO Request <GET: https://weixin.sogou.com/weixin?type=1&query=丁爸20%情报分析师的工具箱&ie=utf8&s_from=input&sug=n&sug_type=>
    liuli_schedule | [2022:05:27 08:11:48] ERROR SGWechatSpider <Item: Failed to get target_item's value from html.>
    liuli_schedule | Traceback (most recent call last):
    liuli_schedule |   File "/root/.local/share/virtualenvs/code-nY5aaahP/lib/python3.9/site-packages/ruia/spider.py", line 197, in _process_async_callback
    liuli_schedule |     async for callback_result in callback_results:
    liuli_schedule |   File "/data/code/src/collector/wechat/sg_ruia_start.py", line 58, in parse
    liuli_schedule |     async for item in SGWechatItem.get_items(html=html):
    liuli_schedule |   File "/root/.local/share/virtualenvs/code-nY5aaahP/lib/python3.9/site-packages/ruia/item.py", line 127, in get_items
    liuli_schedule |     raise ValueError(value_error_info)
    liuli_schedule | ValueError: <Item: Failed to get target_item's value from html.>

    bug 
    opened by hackdoors 7
  • liuli_schedule exited with code 0


    I installed following the instructions at https://mp.weixin.qq.com/s/rxoq97YodwtAdTqKntuwMA.

    The actual files and code are as follows:

    Contents of pro.env:

    PYTHONPATH=${PYTHONPATH}:${PWD}
    LL_M_USER="liuli"
    LL_M_PASS="liuli"
    LL_M_HOST="liuli_mongodb"
    LL_M_PORT="27017"
    LL_M_DB="admin"
    LL_M_OP_DB="liuli"
    LL_FLASK_DEBUG=0
    LL_HOST="0.0.0.0"
    LL_HTTP_PORT=8765
    LL_WORKERS=1
    # The settings above do not need to be changed; only the ones below require per-user configuration
    # Fill in your actual IP
    LL_DOMAIN="http://172.17.0.1:8765"
    # Fill in your WeChat (WeCom) distribution settings
    LL_WECOM_ID="自定义"
    LL_WECOM_AGENT_ID="自定义"
    LL_WECOM_SECRET="自定义"
    

    Contents of default.json:

    {
        "name": "default",
        "author": "liuli_team",
        "collector": {
            "wechat_sougou": {
                "wechat_list": [
                    "老胡的储物柜"
                ],
                "delta_time": 5,
                "spider_type": "playwright"
            }
        },
        "processor": {
            "before_collect": [],
            "after_collect": [{
                "func": "ad_marker",
                "cos_value": 0.6
            }, {
                "func": "to_rss",
                "link_source": "github"
            }]
        },
        "sender": {
            "sender_list": ["wecom"],
            "query_days": 7,
            "delta_time": 3
        },
        "backup": {
            "backup_list": ["mongodb"],
            "query_days": 7,
            "delta_time": 3,
            "init_config": {},
            "after_get_content": [{
                "func": "str_replace",
                "before_str": "data-src=\"",
                "after_str": "src=\"https://images.weserv.nl/?url="
            }]
        },
        "schedule": {
            "period_list": [
                "00:10",
                "12:10",
                "21:10"
            ]
        }
    }
    

    Contents of docker-compose.yml:

    version: "3"
    services:
      liuli_api:
        image: liuliio/api:v0.1.3
        restart: always
        container_name: liuli_api
        ports:
          - "8765:8765"
        volumes:
          - ./pro.env:/data/code/pro.env
        depends_on:
          - liuli_mongodb
        networks:
          - liuli-network
      liuli_schedule:
        image: liuliio/schedule:v0.2.4
        restart: always
        container_name: liuli_schedule
        volumes:
          - ./pro.env:/data/code/pro.env
          - ./liuli_config:/data/code/liuli_config
        depends_on:
          - liuli_mongodb
        networks:
          - liuli-network
      liuli_mongodb:
        image: mongo:3.6
        restart: always
        container_name: liuli_mongodb
        environment:
          - MONGO_INITDB_ROOT_USERNAME=liuli
          - MONGO_INITDB_ROOT_PASSWORD=liuli
        ports:
          - "27027:27017"
        volumes:
          - ./mongodb_data:/data/db
        command: mongod
        networks:
          - liuli-network
    
    networks:
      liuli-network:
        driver: bridge
    

    The error output is as follows:

    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule exited with code 0
    

    I suspect this is a Python path problem. My Python path is:

    which python3 # /usr/bin/python3
    

    My VPS does not have the ${PYTHONPATH} environment variable set:

    echo ${PYTHONPATH} # NULL
    

    How should I fix this?

    opened by huangwb8 7
  • The Liuli project needs a logo

    Origin of the project name, provided by group member @ Sngxpro:

    Codename: 琉璃 (Liuli)
    
    English: RuriElysion
     or: RuriWorld
    
    Slogan: 琉璃开净界,薜荔启禅关 (Mei Yaochen, 《缑山子晋祠 会善寺》)
    
    Meaning: to build a pure land like the Eastern Lapis Lazuli Pure World. As the 《药师经》 (Medicine Buddha Sutra) says: 「然彼佛土,一向清净,无有女人,亦无恶趣,及苦音声。」
    
    help wanted 
    opened by howie6879 7
  • Request: include the original article link in the RSS feed

    image

    I currently plan to write a script that fetches the full text through a full-text extraction API and then sends it to my Gmail in a custom format, so that besides newsletters, RSS subscriptions and WeChat Official Accounts can all be read directly in Spark...

    However, the paid full-text API I found is rather demanding about its input: the link format in the RSS feed does not work, and even after converting it with decodeURIComponent the format is still not accepted.

    If the RSS feed carried the original page URL, the full text could be fetched from the original link without errors!

    I hope the author can support this. Thanks :)

    opened by CenBoMin 5
  • Feature request: stop the updated field in the generated RSS from changing

    An excerpt of the generated RSS is shown below. The updated date here is the time at which liuli touched the entry during its periodic runs, so even a very old entry gets its updated field bumped to the current time.

    <entry>
        <id>liuli_wechat - 谷歌开发者 - 社区说|TensorFlow 在工业视觉中的落地</id>
        <title>社区说|TensorFlow 在工业视觉中的落地 </title>
        <updated>2022-05-28T13:17:35.903720+00:00</updated>
        <author>
            <name>liuli_wechat - GDG</name>
        </author>
        <content/>
        <link href="https://ddns.ysmox.com:8766/backup/liuli_wechat/谷歌开发者/%E7%A4%BE%E5%8C%BA%E8%AF%B4%EF%BD%9CTensorFlow%20%E5%9C%A8%E5%B7%A5%E4%B8%9A%E8%A7%86%E8%A7%89%E4%B8%AD%E7%9A%84%E8%90%BD%E5%9C%B0" rel="alternate"/>
        <published>2022-05-25T17:30:46+08:00</published>
    </entry>
    

    This causes problems: some RSS readers (such as Tiny Tiny RSS) sort the timeline by updated rather than published, which makes it impossible to tell which entries were generated recently and which were generated long ago.

    So I hope updated can be kept stable (for example, record the current time when the entry is first stored in MongoDB and leave it unchanged on later periodic runs) or kept identical to published.
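
    For reference, a minimal sketch of the requested behaviour using the feedgen library (an assumption on my part; liuli's RSS module may be implemented quite differently), where each entry's updated is simply pinned to its published time. The URLs below are placeholders:

    from feedgen.feed import FeedGenerator

    fg = FeedGenerator()
    fg.id("liuli_wechat")
    fg.title("liuli_wechat - 谷歌开发者")
    fg.link(href="https://example.com/liuli_wechat.atom", rel="self")  # placeholder feed URL

    fe = fg.add_entry()
    fe.id("liuli_wechat - 谷歌开发者 - 社区说|TensorFlow 在工业视觉中的落地")
    fe.title("社区说|TensorFlow 在工业视觉中的落地")
    fe.link(href="https://example.com/article", rel="alternate")  # placeholder article URL
    published = "2022-05-25T17:30:46+08:00"
    fe.published(published)
    fe.updated(published)  # keep updated identical to published so readers sort by publish time

    print(fg.atom_str(pretty=True).decode("utf-8"))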

    Finally, I hope I have stated my problem and request clearly. Thanks!

    enhancement 
    opened by YsMox 3
  • The demo for crawling WeChat Official Accounts fails

    Following https://mp.weixin.qq.com/s/rxoq97YodwtAdTqKntuwMA, I just started the demo to try crawling WeChat Official Account content, but the log shows the run failed.

    Loading .env environment variables...
    [2022:05:09 10:55:45] INFO  Liuli Schedule(v0.2.4) task([email protected]_team) started successfully :)
    [2022:05:09 10:55:45] INFO  Liuli Task([email protected]_team) schedule time:
     00:10
     12:10
     21:10
    [2022:05:09 10:55:45] ERROR Liuli 执行失败!'doc_source'
    

    The liuli schedule image version used in the docker-compose file provided in the article does not include playwright, while the default JSON provided in the article says to use playwright to crawl WeChat content. I tried switching to the playwright-enabled image version, and the run still fails.

    opened by Colin-XKL 3
  • Timestamp cleaning fails when crawling Official Account articles

    Test script:

    from src.collector.wechat_feddd.start import WeiXinSpider
    WeiXinSpider.request_config = {"RETRIES": 3, "DELAY": 5, "TIMEOUT": 20}
    WeiXinSpider.start_urls = ['https://mp.weixin.qq.com/s/OrCRVCZ8cGOLRf5p5avHOg']
    WeiXinSpider.start()
    

    Cause of the error: during data cleaning the expected format is 2022-03-21 20:59, but the data actually scraped back is 2022-03-22 20:37:12, which makes the clean_doc_ts function raise an error. See the screenshot below: image
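
    A tolerant parser that accepts both formats might look like this (just a sketch with a hypothetical helper name; the real clean_doc_ts in the project may behave differently):

    from datetime import datetime

    def parse_doc_ts(raw: str) -> float:
        # Accept both "2022-03-21 20:59" and "2022-03-22 20:37:12" style timestamps.
        for fmt in ("%Y-%m-%d %H:%M:%S", "%Y-%m-%d %H:%M"):
            try:
                return datetime.strptime(raw, fmt).timestamp()
            except ValueError:
                continue
        raise ValueError(f"Unrecognized timestamp format: {raw!r}")

    print(parse_doc_ts("2022-03-22 20:37:12"))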

    opened by showthesunli 3
  • Dynamically obtain the WeCom distribution department ID parameter

    Two new configuration options are added:

    # WeCom distribution users (user accounts, case-insensitive); separate multiple users with ;
    CC_WECOM_TO_USER=""
    # WeCom distribution departments (department names); separate multiple departments with ;
    CC_WECOM_PARTY=""
    

    If neither option is set, messages are distributed by default to all users in every department of the current application; if they are set, distribution follows the user-provided configuration, roughly as sketched below.
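
    A rough sketch of how the two options could map onto the WeCom send-message payload (build_recipients is a hypothetical helper, and the touser/toparty field names assume the public WeCom message API; the project's actual sender code may differ). When both options are empty, touser falls back to "@all":

    import os

    def build_recipients() -> dict:
        # Multiple values in either option are separated by ";".
        users = [u for u in os.getenv("CC_WECOM_TO_USER", "").split(";") if u]
        parties = [p for p in os.getenv("CC_WECOM_PARTY", "").split(";") if p]
        if not users and not parties:
            # Neither option set: distribute to all users of the current application.
            return {"touser": "@all"}
        payload = {}
        if users:
            payload["touser"] = "|".join(users)  # the WeCom API joins recipients with "|"
        if parties:
            # Department names would still need resolving to department IDs
            # through the WeCom contacts API before filling "toparty".
            payload["toparty"] = "|".join(parties)
        return payload

    print(build_recipients())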

    opened by zyd16888 1
  • v0.2.4: the schedule cannot be started when following the tutorial

    If I manually add the pro.env file as the tutorial describes, Docker will not start; if I do not add the file manually, starting Docker automatically creates a pro.env directory, and Docker then loops with the following output:

    Loading .env environment variables...
    Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    Warning: file PIPENV_DOTENV_LOCATION=./pro.env does not exist!! Not loading environment variables.
    Process Process-1:
    Traceback (most recent call last):
      File "/usr/local/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
        self.run()
      File "/usr/local/lib/python3.9/multiprocessing/process.py", line 108, in run
        self._target(*self._args, **self._kwargs)
      File "/data/code/src/liuli_schedule.py", line 84, in run_liuli_schedule
        ll_config = json.load(load_f)
      File "/usr/local/lib/python3.9/json/__init__.py", line 293, in load
        return loads(fp.read(),
      File "/usr/local/lib/python3.9/json/__init__.py", line 346, in loads
        return _default_decoder.decode(s)
      File "/usr/local/lib/python3.9/json/decoder.py", line 337, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "/usr/local/lib/python3.9/json/decoder.py", line 355, in raw_decode
        raise JSONDecodeError("Expecting value", s, err.value) from None

    opened by zhyueyueniao 1
  • Sender support

    The current plan is to support sending articles to the following targets:

    • [x] DingTalk, fairly open and easy to integrate, recommended @howie6879
    • [x] WeChat, via WeCom (enterprise WeChat) or a hook @howie6879
    • [x] RSS generator module @howie6879
    • [x] TG @123seven
    • [x] Bark @LeslieLeung
    • [ ] Feishu (Lark)

    Requests for additional distribution targets are welcome in the comments.

    enhancement help wanted 
    opened by howie6879 14
Releases(v0.2.0)
  • v0.2.0(Feb 10, 2022)

    v0.2.0 2022-02-11

    liuli v0.2.0 👏 has been released. See the kanban plan here; the related features and improvements are described below.

    Improvements:

    • Partial code refactor; the project is renamed to liuli
    • Faster deployment; docker-compose is supported #17
    • Project size reduced from 100 MB to 3 MB (model removed)

    Fixes:

    • Sender: WeCom distribution department ID parameter made dynamic #16 @zyd16888
    • Fixed connection failures when the password contains special characters #35 @gclm

    Features:

    Source code(tar.gz)
    Source code(zip)
Owner
howie.hu
奇文共欣赏,疑义相与析 (We delight in remarkable writings together, and work through doubtful points together)