purge history

This commit is contained in:
GitHub Actions 2026-01-07 04:36:44 +00:00
commit b27907f58e
630 changed files with 296025 additions and 0 deletions

34
.github/DISCLAIMER.md vendored Executable file

@@ -0,0 +1,34 @@
# **Disclaimer**
The code, interfaces, and documentation provided by this project (hereinafter "this repository") are intended for technical research, learning, exchange, and lawful use only; use for any commercial or illegal purpose is strictly prohibited. Users bear full responsibility for any consequences arising from their use of this project.
## 1. **Data sources**
- The interfaces, data, and resources in this repository all **come from publicly available content on the Internet**, and may have been collected and organized by technical means (such as web crawlers or public APIs).
- All data content is **provided by third parties**; this repository **does not store, modify, or control** any audio/video, image/text, or other content, and only provides technical interfaces or tools.
- The project maintainers **cannot guarantee** the stability of the interfaces or the legality, accuracy, or timeliness of the content, and are not responsible for the copyright or quality of third-party resources.
## 2. **User responsibilities**
- Users must comply with the laws and regulations of their own country/region. Using this project for any of the following is prohibited:
  - Distributing pirated, pornographic, violent, or otherwise illegal content.
  - Infringing others' intellectual property, privacy, or other lawful rights.
  - Launching malicious attacks against third-party servers or disrupting normal services.
- Users must judge for themselves and bear **all risks** arising from accessing or using third-party resources; the maintainers accept no liability.
## 3. **Limitation of liability**
- The maintainers and contributors of this repository are **not responsible for**:
  - Device damage, data loss, or legal disputes caused by using this project.
  - Problems caused by third-party interfaces failing, content being taken down, or services being terminated.
  - Direct or indirect consequences of users abusing the project's features.
- The code and documentation in this repository are provided "**as is**", with no warranty of any kind.
## 4. **Third-party content and copyright**
- If you believe a resource referenced by this project infringes your rights, please submit valid proof via [contact details]; we will remove the content after verification.
- This repository follows the **principle of technical neutrality** and does not participate in the creation, distribution, or monetization of any resources; please support official content through legal channels.
## 5. **Terms of use**
- Downloading, copying, modifying, or using the contents of this repository is deemed acceptance that you **have read and agree to this statement**.
- The maintainers reserve the right to modify or update this disclaimer at any time without notice.
---
**Contact**: <a href="mailto:clun@duck.com">clun@duck.com</a> (for content removal or cooperation, please state your reason)

15
.github/FUNDING.yml vendored Executable file

@@ -0,0 +1,15 @@
# These are supported funding model platforms
github: [cluntop] # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
# patreon: # Replace with a single Patreon username
# open_collective: # Replace with a single Open Collective username
# ko_fi: # Replace with a single Ko-fi username
# tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
# community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
# liberapay: # Replace with a single Liberapay username
# issuehunt: # Replace with a single IssueHunt username
# lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
# polar: # Replace with a single Polar username
# buy_me_a_coffee: # Replace with a single Buy Me a Coffee username
# thanks_dev: # Replace with a single thanks.dev username
# custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

79
.github/README.md vendored Executable file

@@ -0,0 +1,79 @@
###### ⚠️ **Important**: be sure to read the [Disclaimer](DISCLAIMER.md) before use
#### GitHub notifications are posted in the Telegram [channel](https://t.me/clun_tz) / [group](https://t.me/clun_top)
#### Stars and PRs are welcome.
<details>
<summary>Contribution guide</summary>
###### Contributions are welcome! Feel free to submit a Pull Request.
###### Fork the repository cluntop/tvbox
> Create a feature branch (`git checkout -b cluntop/tvbox`)
> Commit your changes (`git commit -m 'describe the change'`)
> Push the branch (`git push origin cluntop/tvbox`)
> Open a Pull Request
###### WebDav TVbox interface configuration
```
Example  https://pan.clun.top/dav
Host     pan.clun.top
Path     /dav
Protocol SSL
Port     443
Username tvbox
Password tvbox
```
</details>
<details>
<summary>TVBox interfaces for personal use</summary>
###### TVBox personal interface
```
https://clun.top/box.json
```
###### TVBox PG interface
```
https://clun.top/jsm.json
```
###### TVBox 18+ interface
```
https://clun.top/fun.json
```
###### TVBox 张佬 (Zhang Lao) interface
```
https://clun.top/js/aa.json
```
</details>
<details>
<summary>TVBox APP downloads</summary>
###### FongMi leanback [v7a](https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/fongmi/apk/release/leanback-armeabi_v7a.apk) [v8a](https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/fongmi/apk/release/leanback-arm64_v8a.apk)
###### okjack leanback [v7a](https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/okjack/apk/release/leanback-armeabi_v7a.apk) [v8a](https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/okjack/apk/release/leanback-arm64_v8a.apk)
###### FongMi mobile [v7a](https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/fongmi/apk/release/mobile-armeabi_v7a.apk) [v8a](https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/fongmi/apk/release/mobile-arm64_v8a.apk)
###### okjack mobile [v7a](https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/okjack/apk/release/mobile-armeabi_v7a.apk) [v8a](https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/okjack/apk/release/mobile-arm64_v8a.apk)
> mobile = phone version
> leanback = TV version
> arm64_v8a = 64-bit
> armeabi_v7a = 32-bit
</details>
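The architecture notes above can be turned into a small helper that maps a device's reported ABI to the matching APK file name. This is an illustrative sketch; the `pick_apk` function is not part of the repository, and the ABI strings are the standard Android values you would get from `adb shell getprop ro.product.cpu.abi`:

```shell
#!/bin/sh
# Map an Android ABI string to the matching APK name.
# $1 = variant (leanback for TV, mobile for phone), $2 = ABI.
pick_apk() {
  case "$2" in
    arm64-v8a)   echo "$1-arm64_v8a.apk" ;;    # 64-bit devices
    armeabi-v7a) echo "$1-armeabi_v7a.apk" ;;  # 32-bit devices
    *)           echo "unsupported ABI: $2" >&2; return 1 ;;
  esac
}

pick_apk leanback arm64-v8a   # -> leanback-arm64_v8a.apk
```

The printed name matches the APK file names in the download links above, so the result can be appended to the mirror URL prefix directly.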

33
.github/dependabot.yml vendored Executable file

@@ -0,0 +1,33 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
version: 2
updates:
  - package-ecosystem: "github-actions" # See documentation for possible values
    directory: "/" # Location of package manifests
    schedule:
      interval: "daily"
      timezone: "Asia/Shanghai"
      time: "09:00"
    groups:
      all-actions:
        patterns:
          - "*"
    open-pull-requests-limit: 10
    commit-message:
      prefix: "ci"
      prefix-development: "ci"
      include: "scope"
    labels:
      - "dependencies"
      - "documentation"
    pull-request-branch-name:
      separator: "-"

138
.github/ijk.txt vendored Executable file

@@ -0,0 +1,138 @@
"ijk": [{
"group": "软解码",
"options": [{
"category": 4,
"name": "opensles",
"value": "0"
}, {
"category": 4,
"name": "overlay-format",
"value": "842225234"
}, {
"category": 4,
"name": "framedrop",
"value": "1"
}, {
"category": 4,
"name": "soundtouch",
"value": "1"
}, {
"category": 4,
"name": "start-on-prepared",
"value": "1"
}, {
"category": 1,
"name": "http-detect-range-support",
"value": "0"
}, {
"category": 1,
"name": "fflags",
"value": "fastseek"
}, {
"category": 2,
"name": "skip_loop_filter",
"value": "48"
}, {
"category": 4,
"name": "reconnect",
"value": "1"
}, {
"category": 4,
"name": "max-buffer-size",
"value": "8388608"
}, {
"category": 4,
"name": "enable-accurate-seek",
"value": "0"
}, {
"category": 4,
"name": "mediacodec",
"value": "0"
}, {
"category": 4,
"name": "mediacodec-auto-rotate",
"value": "0"
}, {
"category": 4,
"name": "mediacodec-handle-resolution-change",
"value": "0"
}, {
"category": 4,
"name": "mediacodec-hevc",
"value": "0"
}, {
"category": 1,
"name": "dns_cache_timeout",
"value": "600000000"
}
]
}, {
"group": "硬解码",
"options": [{
"category": 4,
"name": "opensles",
"value": "0"
}, {
"category": 4,
"name": "overlay-format",
"value": "842225234"
}, {
"category": 4,
"name": "framedrop",
"value": "1"
}, {
"category": 4,
"name": "soundtouch",
"value": "1"
}, {
"category": 4,
"name": "start-on-prepared",
"value": "1"
}, {
"category": 1,
"name": "http-detect-range-support",
"value": "0"
}, {
"category": 1,
"name": "fflags",
"value": "fastseek"
}, {
"category": 2,
"name": "skip_loop_filter",
"value": "48"
}, {
"category": 4,
"name": "reconnect",
"value": "1"
}, {
"category": 4,
"name": "max-buffer-size",
"value": "12582912"
}, {
"category": 4,
"name": "enable-accurate-seek",
"value": "0"
}, {
"category": 4,
"name": "mediacodec",
"value": "1"
}, {
"category": 4,
"name": "mediacodec-auto-rotate",
"value": "1"
}, {
"category": 4,
"name": "mediacodec-handle-resolution-change",
"value": "1"
}, {
"category": 4,
"name": "mediacodec-hevc",
"value": "1"
}, {
"category": 1,
"name": "dns_cache_timeout",
"value": "600000000"
}
]
}
],

6
.github/requirements.txt vendored Executable file

@@ -0,0 +1,6 @@
GitPython==3.1.43
Telethon==1.37.0
requests==2.32.4
typing_extensions==4.12.2
demoji==1.1.0
tqdm==4.66.5

0
.github/test.json vendored Executable file

66
.github/test.json.txt vendored Executable file

@@ -0,0 +1,66 @@
{
"key": "豆瓣",
"name": "豆瓣",
"type": 3,
"api": "csp_Douban",
"searchable": 0,
"changeable": 1,
"indexs": 1,
"ext": "./lib/tokenm.json$$$./lib/douban.json"
},
{
"key": "嗷呜弹幕",
"name": "弹幕",
"type": 3,
"jar": "./jar/spider-woof.jar",
"api": "csp_GoConfig",
"indexs": 1,
"searchable": 0,
"filterable": 0,
"quickSearch": 0,
"changeable": 0,
"ext": "./lib"
},
{
"key": "AList",
"name": "Alist",
"type": 3,
"api": "csp_AList",
"searchable": 1,
"filterable": 1,
"changeable": 1,
"timeout": 60,
"vodPic": "./img/file.jpg",
"ext": "./js/alist.json"
},
{
"key": "BiliBili",
"name": "Bili_MTV",
"type": 3,
"api": "csp_Bili",
"style": {
"type": "rect",
"ratio": 1.597
},
"searchable": 1,
"quickSearch": 0,
"changeable": 0,
"timeout": 60,
"ext": {
"json": "./js/mtv.json",
"cookie": ""
}
},
{
"key": "采集集合",
"name": "采集集合",
"type": 1,
"api": "http://127.0.0.1:1988/lb?lb=3",
"jar": "./jar/yt-aa.jar",
"searchable": 1,
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
}

45
.github/test.md vendored Executable file

@@ -0,0 +1,45 @@
//searchable: search toggle (0: off, 1: on)
//filterable: filter toggle (0: off, 1: on)
//changeable: source-switch toggle (0: off, 1: on)
//quickSearch: quick-search toggle (0: off, 1: on)
//playerType: player type (1: IJK, 2: EXO)
//type: collection interface type (0: xml, 1: json)
//parses: parse type (0: sniff with the built-in player, 1: parser returns a direct link)
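Put together, a source entry using the flags documented above might look like the following sketch. The `key`, `name`, and `api` values are placeholders, not a real source; the flag values are just one plausible combination:

```json
{
  "key": "demo",
  "name": "Demo source",
  "type": 1,
  "api": "https://example.com/api.php/provide/vod/",
  "searchable": 1,
  "filterable": 1,
  "quickSearch": 1,
  "changeable": 0,
  "playerType": 2
}
```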
小苹果影视
https://framagit.org/168/myys/-/raw/main/bb/jar/fenghuang.jar
"categories": ["国产动漫", "日韩动漫", "国产剧"]
Video-on-demand sources
https://json.doube.eu.org/t3.php
https://www.xn--sss604efuw.com/jm/
Category URL parameters: .cateId = category, .class = type, .area = region, .lang = language, .year = year, .by = sort order, .catePg = category page?
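As a sketch, the placeholder fields above can be substituted into a category URL template like this; the base URL and all parameter values here are hypothetical, only the parameter names come from the notes above:

```shell
#!/bin/sh
# Fill a category URL template with the documented placeholders:
# $1=cateId $2=class $3=area $4=year $5=by $6=catePg
build_cate_url() {
  printf 'https://example.com/vod?cateId=%s&class=%s&area=%s&year=%s&by=%s&catePg=%s\n' \
    "$1" "$2" "$3" "$4" "$5" "$6"
}

build_cate_url 1 tv mainland 2024 time 2
```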
https://github.com/hjdhnx/drpy-node
https://github.com/fangkuia/XPTV
https://github.com/fafa002/yf2025
https://github.com/fanmingming/live
https://github.com/yoursmile66/TVBox
https://github.com/977567941/Kowaryou
https://github.com/xyq254245/xyqonlinerule
https://github.com/towerstreet/IPTV-TVBOX
https://github.com/fastbuild7099/fastbuild7099
-
https://github.com/xMydev/TVBoxRuleMaster
https://gitee.com/PizazzXS/another-d
https://raw.githubusercontent.com/ljlfct01/ljlfct01.github.io/main/%E8%85%BE%E4%BA%914k.js
https://gitee.com/yuyu10588/tt/raw/t/t/drpy2.min.js
https://www.gitlink.org.cn/LLwj/dmbj
http://www.kgj.cc/?post=1137

59
.github/workflows/clear.yml vendored Executable file

@@ -0,0 +1,59 @@
name: Clean up the garbage
on:
  schedule:
    - cron: '0 4 7 * *'
  workflow_dispatch:
permissions:
  contents: write
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout target repo
        uses: actions/checkout@v6
        with:
          repository: cluntop/tvbox
          token: ${{ secrets.GIT_TOKEN }}
          fetch-depth: 1
      - name: Install git-filter-repo
        run: |
          sudo apt-get update
          sudo apt-get install -y python3-pip
          pip3 install git-filter-repo
      - name: Configure Git
        run: |
          git config --global user.name "GitHub Actions"
          git config --global user.email "actions@github.com"
      - name: Compressing
        run: |
          git prune-packed
          git reflog expire --expire=now --all
          git gc --prune=now --aggressive
      - name: Force Push Changes
        run: |
          git push origin --force --all
          git push origin --force --tags
      - name: Reset History
        run: |
          git checkout --orphan latest_branch
          git add -A
          git commit -m "purge history"
          git branch -D main
          git branch -m main
      - name: Force Push to Remote
        run: |
          git push -f origin main

16
.github/workflows/greetings.yml vendored Normal file

@@ -0,0 +1,16 @@
name: Greetings
on: [pull_request_target, issues]
jobs:
  greeting:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
    steps:
      - uses: actions/first-interaction@v3
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          issue-message: "Message that will be displayed on users' first issue"
          pr-message: "Message that will be displayed on users' first pull request"

99
.github/workflows/static.yml vendored Executable file

@@ -0,0 +1,99 @@
# Simple workflow for deploying static content to GitHub Pages
name: Update Deploy to Pages
on:
  # Runs on pushes targeting the default branch
  push:
    branches: ["main"]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
env:
  DEPLOY_REF: ${{ github.event.inputs.deploy_branch || github.ref }}
  UPLOAD_PATH: "."
  TZ: Asia/Shanghai
# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: write
  pages: write
  id-token: write
# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "pages"
  cancel-in-progress: true
jobs:
  # Single deploy job since we're just deploying
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v6
        with:
          repository: cluntop/tvbox
          token: ${{ secrets.GIT_TOKEN }}
          fetch-depth: 1
          lfs: true
      - name: 🔄 Auto Commit Timestamp
        if: github.event_name == 'schedule'
        run: |
          cd ${{ env.UPLOAD_PATH }}
          echo "Last deployment run at: $(date)" > last_run.txt
          git config --global user.name "github-actions[bot]"
          git config --global user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add last_run.txt
          if ! git diff-index --quiet HEAD; then
            git commit -m "🤖 Auto-update: Scheduled deployment [skip ci]"
            git push
            echo "::notice::✅ Timestamp committed automatically"
          else
            echo "::notice::⚠️ No file changes, skipping commit"
          fi
      - name: Verify & Generate CNAME
        run: |
          echo "clun.top" > ${{ env.UPLOAD_PATH }}/CNAME
          FILE_COUNT=$(find ${{ env.UPLOAD_PATH }} -type f | wc -l)
          SIZE=$(du -sh ${{ env.UPLOAD_PATH }} | cut -f1)
          echo "### 📦 Ready to deploy" >> $GITHUB_STEP_SUMMARY
          echo "- **Branch**: \`${{ env.DEPLOY_REF }}\`" >> $GITHUB_STEP_SUMMARY
          echo "- **File count**: $FILE_COUNT" >> $GITHUB_STEP_SUMMARY
          echo "- **Total size**: $SIZE" >> $GITHUB_STEP_SUMMARY
          if [ "$FILE_COUNT" -eq 0 ]; then
            echo "::error::❌ No files in the directory, aborting deployment!"
            exit 1
          fi
      - name: Setup Pages
        uses: actions/configure-pages@v5
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v4
        with:
          # Upload entire repository
          path: ${{ env.UPLOAD_PATH }}
          retention-days: 1
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4
      - name: Deployment Success Info
        if: success()
        run: |
          echo "::notice title=Deployment succeeded::✅ Your site is live!"
          echo "### 🚀 Deployment succeeded!" >> $GITHUB_STEP_SUMMARY
          echo "URL: [${{ steps.deployment.outputs.page_url }}](${{ steps.deployment.outputs.page_url }})" >> $GITHUB_STEP_SUMMARY

38
.github/workflows/summary.yml vendored Normal file

@@ -0,0 +1,38 @@
name: Summarize new issues
on:
  issues:
    types: [opened]
jobs:
  summary:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      models: read
      contents: read
    steps:
      - name: Checkout repository
        uses: actions/checkout@v6
        with:
          repository: cluntop/tvbox
          token: ${{ secrets.GIT_TOKEN }}
          fetch-depth: 1
      - name: Run AI inference
        id: inference
        uses: actions/ai-inference@v2
        with:
          prompt: |
            Summarize the following GitHub issue in one paragraph:
            Title: ${{ github.event.issue.title }}
            Body: ${{ github.event.issue.body }}
      - name: Comment with AI summary
        # Pass the model output through the RESPONSE env var instead of
        # interpolating it into the shell command, which would break (or
        # allow injection) if the summary contains quotes.
        run: |
          gh issue comment "$ISSUE_NUMBER" --body "$RESPONSE"
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          ISSUE_NUMBER: ${{ github.event.issue.number }}
          RESPONSE: ${{ steps.inference.outputs.response }}

79
.github/workflows/sync.yml vendored Executable file

@@ -0,0 +1,79 @@
name: Upstream Sync
permissions:
  contents: write
  actions: write
on:
  schedule:
    - cron: "0 * * * *"
  workflow_dispatch:
    inputs:
      upstream_branch:
        description: "Upstream repository branch"
        required: true
        default: "main"
        type: string
      target_branch:
        description: "Target repository branch"
        required: true
        default: "main"
        type: string
      sync_strategy:
        description: "Sync strategy"
        required: true
        default: "discard"
        type: choice
        options:
          - discard # Force sync: discard local changes and mirror upstream (recommended)
          - merge # Attempt a merge: fails if there are conflicts
env:
  UPSTREAM_REPO: "cluntop/tvbox"
  UPSTREAM_BRANCH: ${{ github.event.inputs.upstream_branch || 'main' }}
  TARGET_BRANCH: ${{ github.event.inputs.target_branch || 'main' }}
  SYNC_STRATEGY: ${{ github.event.inputs.sync_strategy || 'discard' }}
jobs:
  sync_upstream:
    name: Sync and Notify
    runs-on: ubuntu-latest
    if: ${{ github.event.repository.fork }}
    steps:
      - name: Checkout Target Repo
        uses: actions/checkout@v6
        with:
          ref: ${{ env.TARGET_BRANCH }}
          repository: cluntop/tvbox
          token: ${{ secrets.GIT_TOKEN }}
          persist-credentials: true
          fetch-depth: 1
      - name: Sync Upstream Changes
        id: sync
        uses: aormsby/Fork-Sync-With-Upstream-action@v3.4.1
        with:
          upstream_sync_repo: ${{ env.UPSTREAM_REPO }}
          upstream_sync_branch: ${{ env.UPSTREAM_BRANCH }}
          target_sync_branch: ${{ env.TARGET_BRANCH }}
          target_repo_token: ${{ secrets.GITHUB_TOKEN }}
          upstream_pull_args: ${{ env.SYNC_STRATEGY == 'discard' && '--allow-unrelated-histories --force' || '' }}
          target_branch_push_args: ${{ env.SYNC_STRATEGY == 'discard' && '--force' || '' }}
          test_mode: false
      - name: Generate Summary
        run: |
          echo "### 🔄 Sync Report" >> $GITHUB_STEP_SUMMARY
          echo "| Metric | Value |" >> $GITHUB_STEP_SUMMARY
          echo "| :--- | :--- |" >> $GITHUB_STEP_SUMMARY
          echo "| **Upstream repo** | [${{ env.UPSTREAM_REPO }}](https://github.com/${{ env.UPSTREAM_REPO }}) |" >> $GITHUB_STEP_SUMMARY
          echo "| **Result** | ${{ steps.sync.outputs.sync_status == 'success' && '✅ Success' || '❌ Failed' }} |" >> $GITHUB_STEP_SUMMARY
      - name: Delete Old Workflows
        uses: Mattraks/delete-workflow-runs@v2
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          repository: ${{ github.repository }}
          retain_days: 1
          keep_minimum_runs: 3

70
.github/workflows/tv_run.yml vendored Executable file

@@ -0,0 +1,70 @@
name: Update data m3u
permissions:
  contents: write
  actions: write
on:
  schedule:
    - cron: '0,30 * * * *' # runs every 30 minutes
  workflow_dispatch: # allow manual triggering
env:
  TZ: Asia/Shanghai
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
jobs:
  Update:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - name: Checkout repository
        uses: actions/checkout@v6
        with:
          repository: cluntop/tvbox
          token: ${{ secrets.GIT_TOKEN }}
          fetch-depth: 1
      - name: Delete old workflow runs
        uses: Mattraks/delete-workflow-runs@v2
        with:
          token: ${{ github.token }}
          repository: ${{ github.repository }}
          retain_days: 1
          keep_minimum_runs: 3
      - name: Set up Python
        uses: actions/setup-python@v6
        with:
          python-version: '3.10'
          cache: 'pip'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          if [ -f .github/requirements.txt ]; then pip install -r .github/requirements.txt; fi
          pip install pandas requests
      - name: Run IPTV script
        run: |
          python py/get_iptv.py
      - name: Commit and Push changes
        run: |
          git config --global user.name "GitHub Actions"
          git config --global user.email "actions@github.com"
          if [ -n "$(git status --porcelain)" ]; then
            echo "Changes detected, committing..."
            git add .
            git commit -m "Update m3u"
            git pull --rebase origin main
            git push origin main
          else
            echo "No changes detected, skipping push."
          fi

90
.github/workflows/zip.yml vendored Executable file

@@ -0,0 +1,90 @@
name: Update zip package
permissions:
  contents: write
  actions: write
on:
  schedule:
    - cron: '0 * * * *' # runs hourly
  workflow_dispatch: # allow manual triggering
env:
  TZ: Asia/Shanghai # set the timezone
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Delete old records
        uses: Mattraks/delete-workflow-runs@v2
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          repository: ${{ github.repository }}
          retain_days: 1
          keep_minimum_runs: 5
      - name: Checkout target repo
        uses: actions/checkout@v6
        with:
          repository: cluntop/tvbox
          token: ${{ secrets.GIT_TOKEN }}
          path: target-repo
          fetch-depth: 1
      - name: Process Xiaosa resource
        run: |
          echo "Starting Xiaosa download..."
          mkdir -p temp_xiaosa
          cd temp_xiaosa
          wget --tries=3 --timeout=30 https://github.com/PizazzGY/NewTVBox/raw/main/%E5%8D%95%E7%BA%BF%E8%B7%AF.zip -O xiaosa.zip
          unzip -q xiaosa.zip
          mkdir -p ../target-repo/js/xiaosa
          if [ -d "TVBoxOSC/tvbox" ]; then
            cp -rf TVBoxOSC/tvbox/* ../target-repo/js/xiaosa/
            echo "Xiaosa files copied successfully."
          else
            echo "Error: Xiaosa source directory structure changed."
            ls -R
          fi
          cd ..
          rm -rf temp_xiaosa
      - name: Process Source Repo
        run: |
          git clone --depth 1 https://github.com/fish2018/PG.git source-repo
          cd source-repo
          ZIP_FILE=$(find . -type f -name "pg.[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]-[0-9][0-9][0-9][0-9].zip" | sort -r | head -n 1)
          if [ -z "$ZIP_FILE" ]; then
            echo "Warning: No matching zip file found in Source Repo."
          else
            echo "Found zip file: $ZIP_FILE"
            unzip -o "$ZIP_FILE" -x "README.txt" -d ../target-repo/
            echo "PG zip extracted to target repo."
          fi
      - name: Commit and push changes
        working-directory: target-repo
        run: |
          git config user.name "GitHub Actions Bot"
          git config user.email "actions@github.com"
          git add .
          if git diff --staged --quiet; then
            echo "No changes to commit."
            exit 0
          else
            COMMIT_MSG="Update zip"
            git commit -m "$COMMIT_MSG"
            git push origin HEAD:main
            echo "Changes pushed successfully."
          fi

63
.gitignore vendored Executable file

@@ -0,0 +1,63 @@
# Logs
*.log*
logs
# Dependency directories
node_modules/
package-lock.json
# Build output
dist/
dist-ssr/
*.local
bundle-analysis.html
# Environment variables
.env
.env.local
.env.development
.env.development.local
.env.test.local
.env.production.local
# Vite specific
.vite/
vite.config.js.timestamp-*
vite.config.ts.timestamp-*
# IDEs and editors
vite.config.dev.ts
.idea/
.vscode/*
!.vscode/extensions.json
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
# OS generated files
Thumbs.db
# Testing
coverage/
*.lcov
# Yarn
.yarn/*
!.yarn/patches
!.yarn/releases
!.yarn/plugins
!.yarn/sdks
!.yarn/versions
.pnp.*
# test
.git/objects/*
data/archive
.DS_Store
runtime/*
archive/
*.ipynb
*.pack
*.pyc

1
CNAME Normal file

@@ -0,0 +1 @@
clun.top

21
LICENSE Executable file

@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 ClunTop

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

1734
box.json Normal file

File diff suppressed because it is too large

1320
fun.json Executable file

File diff suppressed because it is too large

253
git.sh Normal file

@@ -0,0 +1,253 @@
#!/bin/env sh
# (also works under Android's /system/bin/sh)
# Remote repository URL
MY_REPO_URL="https://github.com/cluntop/tvbox.git"
# Log file path
LOG_FILE="/data/data/bin.mt.plus/home/tvbox/.github/git.log"
# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
# Ensure the log directory exists
mkdir -p "$(dirname "$LOG_FILE")" 2>/dev/null
# Logging function
log() {
    if [ -w "$(dirname "$LOG_FILE")" ]; then
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE"
    fi
}
# Message helpers
success_msg() { echo -e "${GREEN}$1${NC}"; log "Success: $1"; }
error_msg() { echo -e "${RED}$1${NC}"; log "Error: $1"; }
warn_msg() { echo -e "${YELLOW}$1${NC}"; log "Warning: $1"; }
info_msg() { echo -e "${CYAN} $1${NC}"; }
# Check network connectivity
check_network() {
    info_msg "Checking network connection..."
    if ping -c 1 -W 2 github.com > /dev/null 2>&1 || ping -c 1 -W 2 baidu.com > /dev/null 2>&1; then
        return 0
    else
        error_msg "Network connection failed"
        return 1
    fi
}
# Check that Git is installed
check_git() {
    if ! command -v git > /dev/null 2>&1; then
        error_msg "Git is not installed"
        exit 1
    fi
}
# Check repository status
check_git_repo() {
    if [ ! -d ".git" ]; then
        warn_msg "The current directory is not a Git repository"
        return 1
    fi
    return 0
}
# Root privilege check
if [ "$(id -u)" -ne 0 ]; then
    warn_msg "Trying to acquire root privileges..."
    exec sudo "$0" "$@" 2>/dev/null
fi
# Initialization
check_git
# ================= Core functions =================
# 1. Initialize a repository
init_repo() {
    if [ -d ".git" ]; then
        error_msg "This is already a Git repository"
        return 1
    fi
    echo -e "About to initialize in ${YELLOW}$(pwd)${NC}..."
    read -p "Confirm? (y/n): " confirm
    if [ "$confirm" = "y" ] || [ "$confirm" = "Y" ]; then
        git init && git checkout -b main 2>/dev/null || git branch -M main
        success_msg "Initialization complete"
        # After init, offer to link a remote repository
        read -p "Link a remote repository now? (y/n): " add_remote
        if [ "$add_remote" = "y" ] || [ "$add_remote" = "Y" ]; then
            warehouse
        fi
    fi
}
# 2. Change the working directory
change_work_dir() {
    echo -e "\n${BLUE}=== Change working directory ===${NC}"
    echo "Current: $(pwd)"
    read -p "Enter a new path: " new_path
    [ -z "$new_path" ] && return
    if [ ! -d "$new_path" ]; then
        read -p "Directory does not exist. Create it? (y/n): " create
        if [ "$create" = "y" ] || [ "$create" = "Y" ]; then
            mkdir -p "$new_path" || { error_msg "Failed to create directory"; return; }
        else
            return
        fi
    fi
    cd "$new_path" || return
    success_msg "Switched to: $(pwd)"
}
# 3. Set the fixed remote repository
warehouse() {
    info_msg "Configuring the remote repository..."
    if ! check_git_repo; then return 1; fi
    target_url="$MY_REPO_URL"
    # Check the current configuration
    if git remote get-url origin > /dev/null 2>&1; then
        current_url=$(git remote get-url origin)
        # POSIX sh string comparison uses a single '='
        if [ "$current_url" = "$target_url" ]; then
            success_msg "Remote repository already configured: $target_url"
            return 0
        else
            warn_msg "Current remote: $current_url"
            warn_msg "Target fixed remote: $target_url"
            read -p "Overwrite with the fixed repository URL? (y/n): " confirm
            if [ "$confirm" != "y" ] && [ "$confirm" != "Y" ]; then
                return 0
            fi
            git remote remove origin
        fi
    fi
    if git remote add origin "$target_url" 2>&1; then
        success_msg "Remote repository bound: $target_url"
    else
        # If add fails, fall back to set-url
        git remote set-url origin "$target_url" && success_msg "Remote repository updated: $target_url"
    fi
}
# 4. Pull
branch() {
    if ! check_git_repo; then return 1; fi
    curr=$(git branch --show-current)
    [ -z "$curr" ] && curr="main"
    info_msg "Pulling origin/$curr ..."
    if git pull origin "$curr" 2>&1; then
        success_msg "Pull succeeded"
    else
        error_msg "Pull failed"
        # Try to set the upstream automatically
        git branch --set-upstream-to=origin/"$curr" "$curr" 2>/dev/null
    fi
}
# 5. Commit
submit() {
    if ! check_git_repo; then return 1; fi
    if [ -z "$(git status --porcelain)" ]; then
        warn_msg "No file changes"
        return 0
    fi
    curr=$(git branch --show-current)
    [ -z "$curr" ] && curr="main"
    info_msg "1. Pulling updates..."
    git pull origin "$curr" > /dev/null 2>&1
    info_msg "2. Adding files..."
    git add .
    info_msg "3. Committing and pushing..."
    msg="Update Up"
    git commit -m "$msg"
    if git push origin "$curr"; then
        success_msg "Push succeeded"
    else
        warn_msg "Push failed; setting the upstream and retrying..."
        git push --set-upstream origin "$curr"
    fi
}
# 6. Status
state() {
    [ -d ".git" ] && git status
}
# 7. Deep clean (two steps)
garbage() {
    if ! check_git_repo; then return 1; fi
    warn_msg "Running a deep clean, please wait..."
    # Step 1: expire stale reflog entries
    echo "1/2: Cleaning the reflog..."
    git reflog expire --expire=now --all 2>/dev/null
    # Step 2: aggressively reclaim space
    echo "2/2: Compressing and pruning objects..."
    if git gc --prune=now --aggressive 2>&1; then
        success_msg "Deep clean complete!"
        # Show the repository size
        size=$(du -sh .git 2>/dev/null | cut -f1)
        info_msg "Current repository size: $size"
    else
        error_msg "Problems occurred during cleanup"
    fi
}
# Menu
show_menu() {
    clear 2>/dev/null || printf '\033[2J\033[H'
    echo -e "${CYAN}=== Git Management Tool ===${NC}"
    echo -e "Location: ${YELLOW}$(pwd)${NC}"
    echo -e "Fixed repository: ${GREEN}$MY_REPO_URL${NC}"
    echo ""
    echo " 1) Commit (pull + add + push)"
    echo " 2) Pull"
    echo " 3) Bind remote repository (Fix Remote)"
    echo " 4) Show status"
    echo " 5) Deep clean (Reflog + GC)"
    echo " 6) Initialize a new repository (Init)"
    echo " 7) Change working directory (Cd)"
    echo " 0) Exit"
    echo ""
}
while true; do
    show_menu
    read -p "Option: " num
    case $num in
        1) submit ;;
        2) branch ;;
        3) warehouse ;;
        4) state ;;
        5) garbage ;;
        6) init_repo ;;
        7) change_work_dir ;;
        0) exit 0 ;;
        *) error_msg "Invalid option" ;;
    esac
    echo ""
    read -p "Press Enter to continue..."
done

BIN
img/0.jpg Executable file

Binary file not shown. Size: 717 KiB

BIN
img/file.jpg Executable file

Binary file not shown. Size: 8.0 KiB

BIN
img/fongmi.jpg Executable file

Binary file not shown. Size: 64 KiB

BIN
img/loadin.gif Executable file

Binary file not shown. Size: 14 KiB

BIN
img/logo.gif Executable file

Binary file not shown. Size: 6.4 KiB

BIN
img/logo.png Executable file

Binary file not shown. Size: 2.4 KiB

BIN
img/pg.gif Executable file

Binary file not shown. Size: 262 KiB

59
index.html Executable file

@@ -0,0 +1,59 @@
<!DOCTYPE html>
<html lang="zh-CN">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>雨呢 Homepage, TVBox, Interfaces</title>
<link rel="stylesheet" type="text/css" href="./js/index.css" as="style">
<link rel="icon" type="image/ico" href="./img/logo.png">
<link rel="dns-prefetch" href="https://gh.clun.top">
<link rel="canonical" href="https://clun.top">
<meta name="keywords" content="雨呢主页,雨呢个人主页,雨呢网盘,疯子社网盘,聚合大全,资源大全,TVBox,接口">
<meta name="description" content="雨呢主页 - 雨呢个人主页.">
<meta property="og:title" content="雨呢主页 - TVBox 接口">
<meta property="og:type" content="profile">
<meta property="og:description" content="雨呢个人主页,TVBox,接口">
<meta property="og:image" content="./img/logo.png">
<meta property="og:url" content="https://clun.top/">
<meta property="og:locale" content="zh-CN">
<script src="./js/jquery.min.js" charset="utf-8"></script>
<script src="./js/index.min.js" charset="utf-8"></script>
</head>
<body>
<div class="container">
<h1>雨呢 Personal Homepage</h1>
<ul>
<li>
<a href="https://t.me/clun_tz" target="_blank">Channel</a> /
<a href="https://t.me/clun_top" target="_blank">Group</a> TVBox interface GitHub
<a href="https://github.com/cluntop/tvbox" target="_blank">link</a>
</li>
<li>It's an honor to have you visit my site!</li>
<li>Personal: <a href="https://clun.top/box.json" target="_blank">https://clun.top/box.json</a></li>
<li>
<strong>FongMi leanback:</strong>
<a href="https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/fongmi/apk/release/leanback-armeabi_v7a.apk" target="_blank">v7a</a> |
<a href="https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/fongmi/apk/release/leanback-arm64_v8a.apk" target="_blank">v8a</a>
</li>
<li>
<strong>okjack leanback:</strong>
<a href="https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/okjack/apk/release/leanback-armeabi_v7a.apk" target="_blank">v7a</a> |
<a href="https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/okjack/apk/release/leanback-arm64_v8a.apk" target="_blank">v8a</a>
</li>
<li>
<strong>FongMi mobile:</strong>
<a href="https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/fongmi/apk/release/mobile-armeabi_v7a.apk" target="_blank">v7a</a> |
<a href="https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/fongmi/apk/release/mobile-arm64_v8a.apk" target="_blank">v8a</a>
</li>
<li>
<strong>okjack mobile:</strong>
<a href="https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/okjack/apk/release/mobile-armeabi_v7a.apk" target="_blank">v7a</a> |
<a href="https://gh.clun.top/raw.githubusercontent.com/FongMi/Release/refs/heads/okjack/apk/release/mobile-arm64_v8a.apk" target="_blank">v8a</a>
</li>
</ul>
<p>
<a id="time"></a> | <a id="cfs">Show</a>
</p>
</div>
</body>
</html>

BIN
jar/fenghuang.jar Executable file

Binary file not shown.

BIN
jar/fm.jar Executable file

Binary file not shown.

BIN
jar/fty.jar Executable file

Binary file not shown.

BIN
jar/moyu.jar Executable file

Binary file not shown.

BIN
jar/netflav.jar Executable file

Binary file not shown.

BIN
jar/ok.jar Executable file

Binary file not shown.

BIN
jar/qf.jar Executable file

Binary file not shown.

BIN
jar/spider-woof.jar Executable file

Binary file not shown.

BIN
jar/svip.jar Executable file

Binary file not shown.

BIN
jar/wex.jar Executable file

Binary file not shown.

BIN
jar/xbpq.jar Executable file

Binary file not shown.

BIN
jar/xc.jar Executable file

Binary file not shown.

BIN
jar/xs.jar Executable file

Binary file not shown.

BIN
jar/yt-aa.jar Executable file

Binary file not shown.

BIN
jar/yt.jar Executable file

Binary file not shown.

BIN
jar/yt_xyz.jar Executable file

Binary file not shown.

BIN
jar/zx.jar Executable file

Binary file not shown.

186
js/4khdr.js Executable file

@@ -0,0 +1,186 @@
var rule = {
title:'4KHDR[磁]',
host:'https://www.4khdr.cn',
homeUrl: "/forum.php?mod=forumdisplay&fid=2&page=1",
url: '/forum.php?mod=forumdisplay&fid=2&filter=typeid&typeid=fyclass&page=fypage',
filter_url:'{{fl.class}}',
filter:{
},
searchUrl: '/search.php#searchsubmit=yes&srchtxt=**;post',
searchable:2,
quickSearch:1,
filterable:0,
headers:{
'User-Agent': 'PC_UA',
'Cookie':'http://127.0.0.1:9978/file:///tvbox/JS/lib/4khdr.txt',
},
timeout:5000,
class_name: "4K电影&4K美剧&4K华语&4K动画&4K纪录片&4K日韩印&蓝光电影&蓝光美剧&蓝光华语&蓝光动画&蓝光日韩印",
class_url:"3&8&15&6&11&4&29&31&33&32&34",
play_parse:true,
play_json:[{
re:'*',
json:{
parse:0,
jx:0
}
}],
lazy:'',
limit:6,
推荐:'ul#waterfall li;a&&title;img&&src;div.auth.cl&&Text;a&&href',
一级:'ul#waterfall li;a&&title;img&&src;div.auth.cl&&Text;a&&href',
二级:{
title:"#thead_subject&&Text",
img:"img.zoom&&src",
desc:'td[id^="postmessage_"] font&&Text',
content:'td[id^="postmessage_"] font&&Text',
tabs:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
TABS=[]
let d = pdfa(html, 'div.pcb table.t_table a');
let tabsa = [];
let tabsq = [];
let tabsm = false;
let tabse = false;
d.forEach(function(it) {
let burl = pdfh(it, 'a&&href');
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
tabsa.push("阿里雲盤");
}else if (burl.startsWith("https://pan.quark.cn/s/")){
tabsq.push("夸克網盤");
}else if (burl.startsWith("magnet")){
tabsm = true;
}else if (burl.startsWith("ed2k")){
tabse = true;
}
});
if (tabsm === true){
TABS.push("磁力");
}
if (tabse === true){
TABS.push("電驢");
}
if (false && tabsa.length + tabsq.length > 1){
TABS.push("選擇右側綫路");
}
let tmpIndex;
tmpIndex=1;
tabsa.forEach(function(it){
TABS.push(it + tmpIndex);
tmpIndex = tmpIndex + 1;
});
tmpIndex=1;
tabsq.forEach(function(it){
TABS.push(it + tmpIndex);
tmpIndex = tmpIndex + 1;
});
log('4khdr TABS >>>>>>>>>>>>>>>>>>' + TABS);
`,
lists:`js:
log(TABS);
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
LISTS = [];
let d = pdfa(html, 'div.pcb table.t_table a');
let lista = [];
let listq = [];
let listm = [];
let liste = [];
d.forEach(function(it){
let burl = pdfh(it, 'a&&href');
let title = pdfh(it, 'a&&Text');
log('4khdr title >>>>>>>>>>>>>>>>>>>>>>>>>>' + title);
log('4khdr burl >>>>>>>>>>>>>>>>>>>>>>>>>>' + burl);
let loopresult = title + '$' + burl;
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = 'push://' + burl;
}
loopresult = title + '$' + burl;
lista.push(loopresult);
}else if (burl.startsWith("https://pan.quark.cn/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = 'push://' + burl;
}
loopresult = title + '$' + burl;
listq.push(loopresult);
}else if (burl.startsWith("magnet")){
listm.push(loopresult);
}else if (burl.startsWith("ed2k")){
liste.push(loopresult);
}
});
if (listm.length>0){
LISTS.push(listm);
}
if (liste.length>0){
LISTS.push(liste);
}
if (false && lista.length + listq.length > 1){
LISTS.push(["選擇右側綫路或3秒後自動跳過$http://127.0.0.1:10079/delay/"]);
}
lista.forEach(function(it){
LISTS.push([it]);
});
listq.forEach(function(it){
LISTS.push([it]);
});
`,
},
搜索:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
if (rule_fetch_params.headers.Cookie.startsWith("http")){
rule_fetch_params.headers.Cookie=fetch(rule_fetch_params.headers.Cookie);
let cookie = rule_fetch_params.headers.Cookie;
setItem(RULE_CK, cookie);
};
log('4khdr search cookie>>>>>>>>>>>>>>>' + rule_fetch_params.headers.Cookie);
let new_host= HOST + '/search.php';
let new_html=request(new_host);
let formhash = pdfh(new_html, 'input[name="formhash"]&&value');
log("4khdr formhash>>>>>>>>>>>>>>>" + formhash);
let params = 'formhash=' + formhash + '&searchsubmit=yes&srchtxt=' + encodeURIComponent(KEY);
let _fetch_params = JSON.parse(JSON.stringify(rule_fetch_params));
let postData = {
body: params
};
Object.assign(_fetch_params, postData);
log("4khdr search postData>>>>>>>>>>>>>>>" + JSON.stringify(_fetch_params));
    let search_html = post(HOST + '/search.php?mod=forum', _fetch_params);
//log("4khdr search result>>>>>>>>>>>>>>>" + search_html);
let d=[];
let dlist = pdfa(search_html, 'div#threadlist ul li');
dlist.forEach(function(it){
let title = pdfh(it, 'h3&&Text');
if (searchObj.quick === true){
if (title.includes(KEY)){
title = KEY;
}
}
let img = "";
let content = pdfh(it, 'p:eq(2)&&Text');
let desc = pdfh(it, 'p:eq(3)&&Text');
let url = pd(it, 'a&&href', HOST);
d.push({
title:title,
img:img,
content:content,
desc:desc,
url:url
})
});
setResult(d);
`,
}

61
js/88ball.js Normal file

@ -0,0 +1,61 @@
var rule = {
title:'88看球',
// host:'http://www.88kanqiu.cc',
host:'http://www.88kanqiu.bar/',
url: "/match/fyclass/live",
searchUrl: "",
searchable: 0,
quickSearch: 0,
class_parse: ".nav-pills li;a&&Text;a&&href;/match/(\\d+)/live",
headers: {
"User-Agent": "PC_UA",
},
timeout: 5000,
play_parse: true,
pagecount:{"1":1,"2":1,"4":1,"22":1,"8":1,"9":1,"10":1,"14":1,"15":1,"12":1,"13":1,"16":1,"28":1,"7":1,"11":1,"33":1,"27":1,"23":1,"26":1,"3":1,"21":1,"18":1},
lazy: `js:
if(/embed=/.test(input)) {
let url = input.match(/embed=(.*?)&/)[1];
url = base64Decode(url);
input = {
jx:0,
url: url.split('#')[0],
parse: 0
}
} else if (/\?url=/.test(input)){
input = {
jx:0,
url: input.split('?url=')[1].split('#')[0],
parse: 0
}
} else {
input
}
`,
limit: 6,
double: false,
推荐: "*",
一级: ".list-group .group-game-item;.d-none&&Text;img&&src;.btn&&Text;a&&href",
二级: {
title: ".game-info-container&&Text;.customer-navbar-nav li&&Text",
img: "img&&src",
desc: ";;;div.team-name:eq(0)&&Text;div.team-name:eq(1)&&Text",
content: "div.game-time&&Text",
tabs: "js:TABS=['实时直播']",
lists: `js:
LISTS = [];
let html = request(input.replace('play', 'play-url'));
let pdata = JSON.parse(html).data;
pdata = pdata.slice(6);
pdata = pdata.slice(0, -2);
pdata = base64Decode(pdata);
// log(pdata);
let jo = JSON.parse(pdata).links;
let d = jo.map(function (it) {
return it.name + '$' + urlencode(it.url)
});
LISTS.push(d)
`,
},
搜索: "",
};

203
js/97tvs.js Executable file

@ -0,0 +1,203 @@
var rule = {
title:'高清MP4吧',
host:'https://www.97tvs.com',
homeUrl: '/',
url: '/fyclass/page/fypage?',
filter_url:'{{fl.class}}',
filter:{
},
searchUrl: '/?s=**',
searchable:2,
quickSearch:0,
filterable:0,
headers:{
'User-Agent': 'PC_UA',
'Cookie':'',
'Referer': 'http://www.97tvs.com/'
},
图片来源:'@Headers={"Accept":"*/*","Referer":"https://www.97tvs.com/","User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36"}',
timeout:5000,
class_name: "动作片&科幻片&爱情片&喜剧片&剧情片&惊悚片&战争片&灾难片&罪案片&动画片&综艺&电视剧",
class_url: "action&science&love&comedy&story&thriller&war&disaster&crime&cartoon&variety&sitcoms",
play_parse:true,
play_json:[{
re:'*',
json:{
parse:0,
jx:0
}
}],
lazy:'',
limit:6,
推荐:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let d = [];
let html = request(input);
let list = pdfa(html, 'div.mainleft ul#post_container li');
list.forEach(it => {
d.push({
title: pdfh(it, 'div.thumbnail img&&alt'),
desc: pdfh(it, 'div.info&&span.info_date&&Text') + ' / ' + pdfh(it, 'div.info&&span.info_category&&Text'),
pic_url: pd(it, 'div.thumbnail img&&src', HOST),
url: pd(it, 'div.thumbnail&&a&&href',HOST)
});
});
setResult(d);
`,
一级:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let d = [];
let html = request(input);
let list = pdfa(html, 'div.mainleft ul#post_container li');
list.forEach(it => {
d.push({
title: pdfh(it, 'div.thumbnail img&&alt'),
desc: pdfh(it, 'div.info&&span.info_date&&Text') + ' / ' + pdfh(it, 'div.info&&span.info_category&&Text'),
pic_url: pd(it, 'div.thumbnail img&&src', HOST),
url: pd(it, 'div.thumbnail&&a&&href',HOST)
});
})
setResult(d);
`,
二级:{
title:"div.article_container h1&&Text",
img:"div#post_content img&&src",
desc:"div#post_content&&Text",
content:"div#post_content&&Text",
tabs:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
TABS=[]
let d = pdfa(html, 'div#post_content p');
let tabsa = [];
let tabsq = [];
let tabsm = false;
let tabse = false;
let tabm3u8 = [];
d.forEach(function(it) {
let burl = pdfh(it, 'a&&href');
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
tabsa.push("阿里雲盤");
}else if (burl.startsWith("https://pan.quark.cn/s/")){
tabsq.push("夸克網盤");
}else if (burl.startsWith("magnet")){
tabsm = true;
}else if (burl.startsWith("ed2k")){
tabse = true;
}
});
if (tabsm === true){
TABS.push("磁力");
}
if (tabse === true){
TABS.push("電驢");
}
let tmpIndex;
tmpIndex=1;
tabsa.forEach(function(it){
TABS.push(it + tmpIndex);
tmpIndex = tmpIndex + 1;
});
tmpIndex=1;
tabsq.forEach(function(it){
TABS.push(it + tmpIndex);
tmpIndex = tmpIndex + 1;
});
tabm3u8.forEach(function(it){
TABS.push(it);
});
log('97tvs TABS >>>>>>>>>>>>>>>>>>' + TABS);
`,
lists:`js:
log(TABS);
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
LISTS = [];
let d = pdfa(html, 'div#post_content p');
let lista = [];
let listq = [];
let listm = [];
let liste = [];
let listm3u8 = {};
d.forEach(function(it){
let burl = pdfh(it, 'a&&href');
let title = pdfh(it, 'a&&Text');
log('97tvs title >>>>>>>>>>>>>>>>>>>>>>>>>>' + title);
log('97tvs burl >>>>>>>>>>>>>>>>>>>>>>>>>>' + burl);
let loopresult = title + '$' + burl;
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = "push://" + burl;
}
loopresult = title + '$' + burl;
lista.push(loopresult);
}else if (burl.startsWith("https://pan.quark.cn/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = "push://" + burl;
}
loopresult = title + '$' + burl;
listq.push(loopresult);
}else if (burl.startsWith("magnet")){
listm.push(loopresult);
}else if (burl.startsWith("ed2k")){
liste.push(loopresult);
}
});
if (listm.length>0){
LISTS.push(listm);
}
if (liste.length>0){
LISTS.push(liste);
}
lista.forEach(function(it){
LISTS.push([it]);
});
listq.forEach(function(it){
LISTS.push([it]);
});
for ( const key in listm3u8 ){
if (listm3u8.hasOwnProperty(key)){
LISTS.push(listm3u8[key]);
}
};
`,
},
搜索:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let search_html = request(input)
//log("97tvs search result>>>>>>>>>>>>>>>" + search_html);
let d=[];
let dlist = pdfa(search_html, 'div.mainleft ul#post_container li');
dlist.forEach(function(it){
let title = pdfh(it, 'div.thumbnail img&&alt').replace( /(<([^>]+)>)/ig, '');
if (title.includes(KEY)){
if (searchObj.quick === true){
title = KEY;
}
let img = pd(it, 'div.thumbnail img&&src', HOST);
let content = pdfh(it, 'div.article div.entry_post&&Text');
let desc = pdfh(it, 'div.info&&span.info_date&&Text');
let url = pd(it, 'div.thumbnail&&a&&href', HOST);
d.push({
title:title,
img:img,
content:content,
desc:desc,
url:url
});
}
});
setResult(d);
`,
}

450
js/aa.json Executable file

@ -0,0 +1,450 @@
{
"spider": "../jar/yt-aa.jar",
"wallpaper": "https://tuapi.eees.cc/api.php?category=fengjing&type=302",
"sites": [
{
"key": "cctv",
"name": "cctv",
"type": 1,
"api": "http://zhangqun19.serv00.net/cctv.php",
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
},
{
"key": "听书go",
"name": "听书go",
"type": 4,
"api": "http://127.0.0.1:1988/lb?lb=11",
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
},
{
"key": "小苹果",
"name": "小苹果",
"type": 4,
"api": "http://zhangqun19.serv00.net/pingguo.php",
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
},
{
"key": "采集全集",
"name": "采集全集",
"type": 4,
"api": "http://127.0.0.1:1988/lb?lb=12",
"ext": "0",
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
},
{
"key": "服务器油管go",
"name": "服务器油管go",
"type": 4,
"api": "http://127.0.0.1:1988",
"style": {
"type": "rect",
"ratio": 1.33
},
"searchable": 1,
"changeable": 1
},
{
"key": "油管清单播放",
"name": "油管清单播放",
"type": 4,
"api": "http://127.0.0.1:1988/lb?lb=9",
"ext": "/storage/emulated/0/lz/json/油管/",
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
},
{
"key": "影视清单播放",
"name": "影视清单播放",
"type": 4,
"api": "http://127.0.0.1:1988/lb?lb=9",
"ext": "/storage/emulated/0/lz/json/影视/",
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
},
{
"key": "其他清单播放",
"name": "其他清单播放",
"type": 4,
"api": "http://127.0.0.1:1988/lb?lb=9",
"ext": "/storage/emulated/0/lz/json/其他/",
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
},
{
"key": "油管文件播放",
"name": "油管文件播放",
"type": 4,
"api": "http://127.0.0.1:1988/lb?lb=8",
"ext": "/storage/emulated/0/lz/wj/油管/",
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
},
{
"key": "影视文件播放",
"name": "影视文件播放",
"type": 4,
"api": "http://127.0.0.1:1988/lb?lb=8",
"ext": "/storage/emulated/0/lz/wj/影视/",
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
},
{
"key": "直播文件播放",
"name": "直播文件播放",
"type": 4,
"api": "http://127.0.0.1:1988/lb?lb=8",
"ext": "/storage/emulated/0/lz/wj/直播/",
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
},
{
"key": "音乐文件播放",
"name": "音乐文件播放",
"type": 4,
"api": "http://127.0.0.1:1988/lb?lb=8",
"ext": "/storage/emulated/0/lz/wj/音乐/",
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
},
{
"key": "欧乐影院",
"name": "欧乐影院",
"type": 4,
"api": "http://127.0.0.1:1988/lb?lb=4",
"searchable": 1,
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
},
{
"key": "采集集合",
"name": "采集集合",
"type": 1,
"api": "http://127.0.0.1:1988/lb?lb=3",
"searchable": 1,
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
},
{
"key": "短剧",
"name": "短剧",
"type": 4,
"api": "http://127.0.0.1:1988/lb?lb=2",
"searchable": 1,
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
},
{
"key": "哔哩go",
"name": "哔哩go",
"type": 4,
"api": "http://127.0.0.1:1988/lb?lb=10",
"style": {
"type": "rect",
"ratio": 1.33
},
"changeable": 1
}
],
"parses": [
{
"name": "解析聚合",
"type": 3,
"url": "Web"
},
{
"name": "777",
"type": 0,
"url": "https://jx.777jiexi.com/player/?url="
},
{
"name": "jsonplayer",
"type": 0,
"url": "https://jx.jsonplayer.com/player/?url="
},
{
"name": "xmflv",
"type": 0,
"url": "https://jx.xmflv.com/?url="
},
{
"name": "公众号:六趣",
"type": 1,
"url": "http://kmp.us.kg/api/?key=A2gODYCEHBd4tVLsbv&url=",
"ext": {
"flag": [
"qq",
"腾讯",
"qiyi",
"爱奇艺",
"奇艺",
"youku",
"优酷",
"sohu",
"搜狐",
"letv",
"乐视",
"mgtv",
"芒果",
"tnmb",
"seven",
"bilibili",
"1905",
"NetFilx"
],
"header": {
"User-Agent": "okhttp/4.9.1"
}
}
}
],
"flags": [
"youku",
"tudou",
"qq",
"qiyi",
"iqiyi",
"leshi",
"letv",
"sohu",
"imgo",
"mgtv",
"bilibili",
"pptv",
"PPTV",
"migu"
],
"rules": [
{
"name": "proxy",
"hosts": [
"raw.githubusercontent.com",
"googlevideo.com",
"cdn.v82u1l.com",
"cdn.iz8qkg.com",
"cdn.kin6c1.com",
"c.biggggg.com",
"c.olddddd.com",
"haiwaikan.com",
"www.histar.tv",
"youtube.com",
"uhibo.com",
".*boku.*",
".*nivod.*",
".*ulivetv.*"
]
},
{
"name": "海外看",
"hosts": [
"haiwaikan"
],
"regex": [
"10.0099",
"10.3333",
"16.0599",
"8.1748",
"10.85"
]
},
{
"name": "索尼",
"hosts": [
"suonizy"
],
"regex": [
"15.1666",
"15.2666"
]
},
{
"name": "暴風",
"hosts": [
"bfzy"
],
"regex": [
"#EXT-X-DISCONTINUITY\\r*\\n*#EXTINF:3,[\\s\\S]*?#EXT-X-DISCONTINUITY"
]
},
{
"name": "测试",
"hosts": [
"zhangqun66.serv00.net"
],
"regex": [
"Smartv.php?id="
]
},
{
"name": "星星",
"hosts": [
"aws.ulivetv.net"
],
"regex": [
"#EXT-X-DISCONTINUITY\\r*\\n*#EXTINF:8,[\\s\\S]*?#EXT-X-DISCONTINUITY"
]
},
{
"name": "量子",
"hosts": [
"vip.lz",
"hd.lz",
"v.cdnlz"
],
"regex": [
"18.5333"
]
},
{
"name": "非凡",
"hosts": [
"vip.ffzy",
"hd.ffzy"
],
"regex": [
"25.0666"
]
},
{
"name": "火山嗅探",
"hosts": [
"huoshan.com"
],
"regex": [
"item_id="
]
},
{
"name": "抖音嗅探",
"hosts": [
"douyin.com"
],
"regex": [
"is_play_url="
]
},
{
"name": "農民嗅探",
"hosts": [
"toutiaovod.com"
],
"regex": [
"video/tos/cn"
]
}
],
"doh": [
{
"name": "Google",
"url": "https://dns.google/dns-query",
"ips": [
"8.8.4.4",
"8.8.8.8"
]
},
{
"name": "Cloudflare",
"url": "https://cloudflare-dns.com/dns-query",
"ips": [
"1.1.1.1",
"1.0.0.1",
"2606:4700:4700::1111",
"2606:4700:4700::1001"
]
},
{
"name": "AdGuard",
"url": "https://dns.adguard.com/dns-query",
"ips": [
"94.140.14.140",
"94.140.14.141"
]
},
{
"name": "DNSWatch",
"url": "https://resolver2.dns.watch/dns-query",
"ips": [
"84.200.69.80",
"84.200.70.40"
]
},
{
"name": "Quad9",
"url": "https://dns.quad9.net/dns-query",
"ips": [
"9.9.9.9",
"149.112.112.112"
]
}
],
"lives": [
{
"name": "XingHuo",
"url": "https://json.doube.eu.org/XingHuo.txt",
"header": {
"Referer": "https://www.kds.tw/"
}
},
{
"name": "MQiTV",
"api": "csp_MQiTV",
"jar": "https://raw.githubusercontent.com/sqspot/tac/refs/heads/main/jar/fmMQiTV.jar",
"ext": "https://59.125.210.231:4433",
"playerType": 1,
"epg": "http://epg.112114.xyz/?ch={name}&date={date}"
},
{
"name": "肥羊国内直播",
"type": 3,
"api": "csp_Feiyang",
"url": "tv.m3u",
"ext": "https://raw.githubusercontent.com/lystv/fmapp/ok/apk/allinone/v7/allinone;md5;https://raw.githubusercontent.com/lystv/fmapp/ok/apk/allinone/v7/md5",
"jar": "https://raw.githubusercontent.com/FongMi/CatVodSpider/main/jar/custom_spider.jar"
}
]
}

126
js/aipanso.js Executable file

@ -0,0 +1,126 @@
var rule = {
title:'爱盘搜[夸]',
host:'https://aipanso.com',
homeUrl:'/',
url: '/forum-fyclass-fypage.html?',
filter_url:'{{fl.class}}',
filter:{
},
searchUrl: '/search?page=fypage&s=1&t=-1&k=**',
searchable:2,
quickSearch:0,
filterable:0,
headers:{
'User-Agent': PC_UA,
'Accept': '*/*',
'Referer': 'https://aipanso.com/'
},
timeout:5000,
class_name:'',
class_url:'',
play_parse:true,
play_json:[{
re:'*',
json:{
parse:0,
jx:0
}
}],
lazy:'',
limit:6,
推荐:'',
一级:'',
二级:{
title:"van-row h3&&Text",
img:"",
desc:"van-row h3&&Text",
content:"van-row h3&&Text",
tabs:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
TABS=[]
TABS.push("夸克網盤");
        log('aipanso TABS >>>>>>>>>>>>>>>>>>' + TABS);
`,
lists:`js:
log(TABS);
LISTS=[];
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let requestHeaders = {
withHeaders: true,
redirect: 0,
headers:{
Referer: MY_URL
}
};
let _fetch_params = JSON.parse(JSON.stringify(rule_fetch_params));
Object.assign(_fetch_params, requestHeaders);
let new_html = request ( MY_URL.replace("/s/","/cv/"), _fetch_params);
let json=JSON.parse(new_html);
let redirectUrl = "";
if (json.hasOwnProperty("Location")){
redirectUrl = json["Location"];
}else if (json.hasOwnProperty("location")){
redirectUrl = json["location"];
}
let title = pdfh(html, 'van-row h3&&Text');
LISTS.push([title + '$' + 'push://' + redirectUrl]);
`,
},
搜索:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
log("aipanso enter search >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>" + KEY);
let withHeaders = {
withHeaders: true
};
let _fetch_params = JSON.parse(JSON.stringify(rule_fetch_params));
Object.assign(_fetch_params, withHeaders);
log('aipanso search params >>>>>>>>>>>>>>>>>>>>>' + JSON.stringify(_fetch_params));
let new_html=request(rule.homeUrl + 'search?page=' + MY_PAGE + '&s=1&t=-1&k=' + encodeURIComponent(KEY) , _fetch_params);
//log('aipanso search new_html >>>>>>>>>>>>>>>>>>>>>' + new_html);
let json=JSON.parse(new_html);
let setCk=Object.keys(json).find(it=>it.toLowerCase()==="set-cookie");
let cookie="";
if (typeof setCk !== "undefined"){
let d=[];
for(const key in json[setCk]){
if (typeof json[setCk][key] === "string"){
log("aipanso header setCk key>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>" + json[setCk][key] + " " + (typeof json[setCk][key]));
d.push(json[setCk][key].split(";")[0]);
}
}
cookie=d.join(";");
setItem(RULE_CK, cookie);
fetch_params.headers.Cookie=cookie;
rule_fetch_params.headers.Cookie=cookie;
}
log('aipanso search cookie >>>>>>>>>>>>>>>>>>>>>' + cookie);
//log('aipanso search body >>>>>>>>>>>>>>>>>>>>>' + json['body'].substring(4096));
new_html = json['body'];
let d=[];
let dlist = pdfa(new_html, 'van-row:has(>a[href^="/s/"])');
dlist.forEach(function(it){
let title = pdfh(it, 'van-card template&&Text');
if (title.includes(KEY)){
if (searchObj.quick === true){
title = KEY;
}
let img = pd(it, 'van-card&&thumb', HOST);
let content = pdfh(it, 'van-card template:eq(1)&&Text');
let desc = pdfh(it, 'van-card template:eq(1)&&Text');
let url = pd(it, 'a&&href', HOST);
d.push({
title:title,
img:img,
content:content,
desc:desc,
url:url
})
}
});
setResult(d);
`,
}

92
js/alist.json Executable file

@ -0,0 +1,92 @@
{
"drives": [
{
"name": "雨呢",
"server": "https://pan.clun.top"
},
{
"name": "触光",
"server": "https://pan.ichuguang.com"
},
{
"name": "魔都云",
"server": "https://cdn.modupan.com"
},
{
"name": "MCCI",
"server": "https://buck.xk.ee",
"login": {
"username": "udian6",
"password": "udian6"
}
},
{
"name": "亿苯",
"server": "https://pan.lm379.cn"
},
{
"name": "秋雨",
"server": "https://share.qiuyu.org"
},
{
"name": "小雅",
"server": "https://alist.xiaoya.pro"
},
{
"name": "日负斗金",
"server": "https://asca0121.toc.icu"
},
{
"name": "短剧库",
"server": "https://cdn.bull369.cloud"
},
{
"name": "七米藍",
"server": "https://al.chirmyram.com"
},
{
"name": "初心",
"server": "https://cxpan.xyz"
},
{
"name": "趣盘",
"server": "https://pan.mediy.cn"
},
{
"name": "电瞄",
"server": "https://pan.110014.xyz"
},
{
"name": "ZhuiFan",
"server": "https://zhuifan.link"
},
{
"name": "Apachec",
"server": "https://w2.apachecn.org"
},
{
"name": "ffssaa7",
"server": "https://alist.ffssaa7.site"
},
{
"name": "Hᴇᴀᴇɴ",
"server": "http://7heaven.eu.org"
},
{
"name": "Aeahhe",
"server": "https://www.yeahhe.online"
},
{
"name": "ECVE",
"server": "https://pan.ecve.cn"
},
{
"name": "一个小站",
"server": "https://alist.ygxz.xyz"
},
{
"name": "云顶天宫",
"server": "https://file.i80k.com"
}
]
}

149
js/apple.js Executable file

@ -0,0 +1,149 @@
let host = 'http://asp.xpgtv.com';
let headers = {
"User-Agent": "okhttp/3.12.11"
};
async function init(cfg) {}
function getList(data) {
let videos = [];
data.forEach(vod => {
let r = vod.updateInfo ? "更新至" + vod.updateInfo : "";
videos.push({
"vod_id": vod.id.toString(),
"vod_name": vod.name,
"vod_pic": vod.pic,
"vod_remarks": r || (vod.score ? vod.score.toString() : "")
});
});
return videos;
}
// ------------------- JSON compatibility: resp.content may be a string or an object -------------------
function parseResp(resp) {
return typeof resp.content === "string" ? JSON.parse(resp.content) : resp.content;
}
async function home(filter) {
let url = host + "/api.php/v2.vod/androidtypes";
let resp = await req(url, { headers: headers });
let data = parseResp(resp);
let dy = { "classes": "类型", "areas": "地区", "years": "年份", "sortby": "排序" };
let demos = ['时间', '人气', '评分'];
let classes = [];
let filters = {};
data.data.forEach(item => {
let typeId = item.type_id.toString();
classes.push({ "type_name": item.type_name, "type_id": typeId });
item['sortby'] = ['updatetime', 'hits', 'score'];
let filterArray = [];
for (let key in dy) {
if (item[key] && item[key].length > 1) {
let values = [];
item[key].forEach((val, idx) => {
let vStr = val.toString().trim();
if (vStr !== "") {
values.push({ "n": key === "sortby" ? demos[idx] : vStr, "v": vStr });
}
});
let fKey = key === "areas" ? "areaes" : (key === "years" ? "yeares" : key);
filterArray.push({ "key": fKey, "name": dy[key], "value": values });
}
}
filters[typeId] = filterArray;
});
return JSON.stringify({ class: classes, filters: filters });
}
async function homeVod() {
let url = host + "/api.php/v2.main/androidhome";
let resp = await req(url, { headers: headers });
let data = parseResp(resp);
let videos = [];
data.data.list.forEach(i => { videos = videos.concat(getList(i.list)); });
return JSON.stringify({ list: videos });
}
async function category(tid, pg, filter, extend) {
let params = {
"page": pg,
"type": tid,
"area": extend.areaes || '',
"year": extend.yeares || '',
"sortby": extend.sortby || '',
"class": extend.classes || ''
};
let query = Object.keys(params).filter(k => params[k] !== '').map(k => k + '=' + encodeURIComponent(params[k])).join('&');
let url = host + '/api.php/v2.vod/androidfilter10086?' + query;
let resp = await req(url, { headers: headers });
let data = parseResp(resp);
return JSON.stringify({ list: getList(data.data), page: parseInt(pg), pagecount: 9999, limit: 90, total: 999999 });
}
async function detail(id) {
let url = host + '/api.php/v3.vod/androiddetail2?vod_id=' + id;
let resp = await req(url, { headers: headers });
let data = parseResp(resp).data;
    // Filter out episode sources whose name contains "及时雨"
let filteredUrls = data.urls.filter(i => !i.key.includes("及时雨"));
let playlist = filteredUrls.map(i => i.key + '$' + i.url).join('#');
let vod = {
'vod_id': id,
'vod_name': data.name,
'vod_year': data.year,
'vod_area': data.area,
'vod_lang': data.lang,
'type_name': data.className,
'vod_actor': data.actor,
'vod_director': data.director,
'vod_content': data.content,
'vod_play_from': '书生精选线路',
'vod_play_url': playlist
};
return JSON.stringify({ list: [vod] });
}
async function search(wd, quick, pg) {
let page = pg || '1';
let url = host + '/api.php/v2.vod/androidsearch10086?page=' + page + '&wd=' + encodeURIComponent(wd);
let resp = await req(url, { headers: headers });
let data = parseResp(resp);
return JSON.stringify({ list: getList(data.data), page: page });
}
async function play(flag, id, flags) {
let playUrl = id;
if (!id.startsWith('http')) {
playUrl = "http://c.xpgtv.net/m3u8/" + id + ".m3u8";
}
const playHeader = {
'user_id': 'XPGBOX',
'token2': 'SnAXiSW8vScXE0Z9aDOnK5xffbO75w1+uPom3WjnYfVEA1oWtUdi2Ihy1N8=',
'version': 'XPGBOX com.phoenix.tv1.5.7',
'hash': 'd78a',
'screenx': '2345',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36',
'token': 'ElEDlwCVgXcFHFhddiq2JKteHofExRBUrfNlmHrWetU3VVkxnzJAodl52N9EUFS+Dig2A/fBa/V9RuoOZRBjYvI+GW8kx3+xMlRecaZuECdb/3AdGkYpkjW3wCnpMQxf8vVeCz5zQLDr8l8bUChJiLLJLGsI+yiNskiJTZz9HiGBZhZuWh1mV1QgYah5CLTbSz8=',
'timestamp': '1743060300',
'screeny': '1065',
'Accept': '*/*',
'Connection': 'keep-alive'
};
return JSON.stringify({
parse: 0,
url: playUrl,
header: playHeader
});
}
export default { init, home, homeVod, category, detail, search, play };

61
js/cilixiong.js Executable file

@ -0,0 +1,61 @@
var rule = {
title:'磁力熊[磁]',
host:'https://www.cilixiong.com',
homeUrl:'/',
url: '/fyclassfyfilter-(fypage-1).html',
//host:'http://127.0.0.1:10079',
//homeUrl:'/p/0/socks5%253A%252F%252F192.168.101.1%253A1080/https://www.cilixiong.com',
//url:'/p/0/socks5%253A%252F%252F192.168.101.1%253A1080/https://www.cilixiong.com/fyclassfyfilter-(fypage-1).html',
filter_url:'-{{fl.class or "0"}}-{{fl.area or "0"}}',
filter:{
"1":[{"key":"class","name":"类型","value":[{"n":"全部","v":"0"},{"n":"剧情","v":"1"},{"n":"喜剧","v":"2"},{"n":"惊悚","v":"3"},{"n":"动作","v":"4"},{"n":"爱情","v":"5"},{"n":"犯罪","v":"6"},{"n":"恐怖","v":"7"},{"n":"冒险","v":"8"},{"n":"悬疑","v":"9"},{"n":"科幻","v":"10"},{"n":"家庭","v":"11"},{"n":"奇幻","v":"12"},{"n":"动画","v":"13"},{"n":"战争","v":"14"},{"n":"历史","v":"15"},{"n":"传记","v":"16"},{"n":"音乐","v":"17"},{"n":"歌舞","v":"18"},{"n":"运动","v":"19"},{"n":"西部","v":"20"},{"n":"灾难","v":"21"},{"n":"古装","v":"22"},{"n":"情色","v":"23"},{"n":"同性","v":"24"},{"n":"儿童","v":"25"},{"n":"纪录片","v":"26"}]},{"key":"area","name":"地区","value":[{"n":"全部","v":"0"},{"n":"大陆","v":"1"},{"n":"香港","v":"2"},{"n":"台湾","v":"3"},{"n":"美国","v":"4"},{"n":"日本","v":"5"},{"n":"韩国","v":"6"},{"n":"英国","v":"7"},{"n":"法国","v":"8"},{"n":"德国","v":"9"},{"n":"印度","v":"10"},{"n":"泰国","v":"11"},{"n":"丹麦","v":"12"},{"n":"瑞典","v":"13"},{"n":"巴西","v":"14"},{"n":"加拿大","v":"15"},{"n":"俄罗斯","v":"16"},{"n":"意大利","v":"17"},{"n":"比利时","v":"18"},{"n":"爱尔兰","v":"19"},{"n":"西班牙","v":"20"},{"n":"澳大利亚","v":"21"},{"n":"波兰","v":"22"},{"n":"土耳其","v":"23"},{"n":"越南","v":"24"}]}],
"2":[{"key":"class","name":"类型","value":[{"n":"全部","v":"0"},{"n":"剧情","v":"1"},{"n":"喜剧","v":"2"},{"n":"惊悚","v":"3"},{"n":"动作","v":"4"},{"n":"爱情","v":"5"},{"n":"犯罪","v":"6"},{"n":"恐怖","v":"7"},{"n":"冒险","v":"8"},{"n":"悬疑","v":"9"},{"n":"科幻","v":"10"},{"n":"家庭","v":"11"},{"n":"奇幻","v":"12"},{"n":"动画","v":"13"},{"n":"战争","v":"14"},{"n":"历史","v":"15"},{"n":"传记","v":"16"},{"n":"音乐","v":"17"},{"n":"歌舞","v":"18"},{"n":"运动","v":"19"},{"n":"西部","v":"20"},{"n":"灾难","v":"21"},{"n":"古装","v":"22"},{"n":"情色","v":"23"},{"n":"同性","v":"24"},{"n":"儿童","v":"25"},{"n":"纪录片","v":"26"}]},{"key":"area","name":"地区","value":[{"n":"全部","v":"0"},{"n":"大陆","v":"1"},{"n":"香港","v":"2"},{"n":"台湾","v":"3"},{"n":"美国","v":"4"},{"n":"日本","v":"5"},{"n":"韩国","v":"6"},{"n":"英国","v":"7"},{"n":"法国","v":"8"},{"n":"德国","v":"9"},{"n":"印度","v":"10"},{"n":"泰国","v":"11"},{"n":"丹麦","v":"12"},{"n":"瑞典","v":"13"},{"n":"巴西","v":"14"},{"n":"加拿大","v":"15"},{"n":"俄罗斯","v":"16"},{"n":"意大利","v":"17"},{"n":"比利时","v":"18"},{"n":"爱尔兰","v":"19"},{"n":"西班牙","v":"20"},{"n":"澳大利亚","v":"21"},{"n":"波兰","v":"22"},{"n":"土耳其","v":"23"},{"n":"越南","v":"24"}]}]
},
    searchUrl: '/e/search/index.php#classid=1,2&show=title&tempid=1&keyboard=**;post',
searchable:0,
quickSearch:0,
filterable:1,
headers:{
'User-Agent': 'MOBILE_UA'
},
timeout:5000,
class_name:'电影&剧集&豆瓣电影Top250&IMDB Top250&高分悬疑片&高分喜剧片&高分传记片&高分爱情片&高分犯罪片&高分恐怖片&高分冒险片&高分武侠片&高分奇幻片&高分历史片&高分战争片&高分歌舞片&高分灾难片&高分情色片&高分西部片&高分音乐片&高分科幻片&高分动作片&高分动画片&高分纪录片&冷门佳片',
class_url:'1&2&/top250/&/s/imdbtop250/&/s/suspense/&/s/comedy/&/s/biopic/&/s/romance/&/s/crime/&/s/horror/&/s/adventure/&/s/martial/&/s/fantasy/&/s/history/&/s/war/&/s/musical/&/s/disaster/&/s/erotic/&/s/west/&/s/music/&/s/sci-fi/&/s/action/&/s/animation/&/s/documentary/&/s/unpopular/',
play_parse:false,
lazy:'',
limit:6,
推荐: `js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
var d = [];
var html = request(input);
var list = pdfa(html, 'body&&.col');
list.forEach(it => {
d.push({
title: pdfh(it, 'h2&&Text'),
desc: pdfh(it, '.me-auto&&Text') + '分 / ' + pdfh(it, '.small&&Text'),
pic_url: pd(it, '.card-img&&style')
});
})
setResult(d);
`,
一级: `js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
var d = [];
if (MY_CATE !== '1' && MY_CATE !== '2') {
let turl = (MY_PAGE === 1)? 'index' : 'index_'+ MY_PAGE;
input = rule.homeUrl + MY_CATE + turl + '.html';
}
var html = request(input);
var list = pdfa(html, 'body&&.col');
list.forEach(it => {
d.push({
title: pdfh(it, 'h2&&Text'),
desc: pdfh(it, '.me-auto&&Text') + '分 / ' + pdfh(it, '.small&&Text'),
pic_url: pdfh(it, '.card-img&&style')
});
})
setResult(d);
`,
二级:'',
搜索:'',
}

61
js/cilixiongp.js Executable file

@ -0,0 +1,61 @@
var rule = {
title:'磁力熊[磁]',
//host:'https://www.cilixiong.com',
//homeUrl:'/',
//url: '/fyclassfyfilter-(fypage-1).html',
host:'http://127.0.0.1:10079',
homeUrl:'/p/0/127.0.0.1:10072/https://www.cilixiong.com',
url:'/p/0/127.0.0.1:10072/https://www.cilixiong.com/fyclassfyfilter-(fypage-1).html',
filter_url:'-{{fl.class or "0"}}-{{fl.area or "0"}}',
filter:{
"1":[{"key":"class","name":"类型","value":[{"n":"全部","v":"0"},{"n":"剧情","v":"1"},{"n":"喜剧","v":"2"},{"n":"惊悚","v":"3"},{"n":"动作","v":"4"},{"n":"爱情","v":"5"},{"n":"犯罪","v":"6"},{"n":"恐怖","v":"7"},{"n":"冒险","v":"8"},{"n":"悬疑","v":"9"},{"n":"科幻","v":"10"},{"n":"家庭","v":"11"},{"n":"奇幻","v":"12"},{"n":"动画","v":"13"},{"n":"战争","v":"14"},{"n":"历史","v":"15"},{"n":"传记","v":"16"},{"n":"音乐","v":"17"},{"n":"歌舞","v":"18"},{"n":"运动","v":"19"},{"n":"西部","v":"20"},{"n":"灾难","v":"21"},{"n":"古装","v":"22"},{"n":"情色","v":"23"},{"n":"同性","v":"24"},{"n":"儿童","v":"25"},{"n":"纪录片","v":"26"}]},{"key":"area","name":"地区","value":[{"n":"全部","v":"0"},{"n":"大陆","v":"1"},{"n":"香港","v":"2"},{"n":"台湾","v":"3"},{"n":"美国","v":"4"},{"n":"日本","v":"5"},{"n":"韩国","v":"6"},{"n":"英国","v":"7"},{"n":"法国","v":"8"},{"n":"德国","v":"9"},{"n":"印度","v":"10"},{"n":"泰国","v":"11"},{"n":"丹麦","v":"12"},{"n":"瑞典","v":"13"},{"n":"巴西","v":"14"},{"n":"加拿大","v":"15"},{"n":"俄罗斯","v":"16"},{"n":"意大利","v":"17"},{"n":"比利时","v":"18"},{"n":"爱尔兰","v":"19"},{"n":"西班牙","v":"20"},{"n":"澳大利亚","v":"21"},{"n":"波兰","v":"22"},{"n":"土耳其","v":"23"},{"n":"越南","v":"24"}]}],
"2":[{"key":"class","name":"类型","value":[{"n":"全部","v":"0"},{"n":"剧情","v":"1"},{"n":"喜剧","v":"2"},{"n":"惊悚","v":"3"},{"n":"动作","v":"4"},{"n":"爱情","v":"5"},{"n":"犯罪","v":"6"},{"n":"恐怖","v":"7"},{"n":"冒险","v":"8"},{"n":"悬疑","v":"9"},{"n":"科幻","v":"10"},{"n":"家庭","v":"11"},{"n":"奇幻","v":"12"},{"n":"动画","v":"13"},{"n":"战争","v":"14"},{"n":"历史","v":"15"},{"n":"传记","v":"16"},{"n":"音乐","v":"17"},{"n":"歌舞","v":"18"},{"n":"运动","v":"19"},{"n":"西部","v":"20"},{"n":"灾难","v":"21"},{"n":"古装","v":"22"},{"n":"情色","v":"23"},{"n":"同性","v":"24"},{"n":"儿童","v":"25"},{"n":"纪录片","v":"26"}]},{"key":"area","name":"地区","value":[{"n":"全部","v":"0"},{"n":"大陆","v":"1"},{"n":"香港","v":"2"},{"n":"台湾","v":"3"},{"n":"美国","v":"4"},{"n":"日本","v":"5"},{"n":"韩国","v":"6"},{"n":"英国","v":"7"},{"n":"法国","v":"8"},{"n":"德国","v":"9"},{"n":"印度","v":"10"},{"n":"泰国","v":"11"},{"n":"丹麦","v":"12"},{"n":"瑞典","v":"13"},{"n":"巴西","v":"14"},{"n":"加拿大","v":"15"},{"n":"俄罗斯","v":"16"},{"n":"意大利","v":"17"},{"n":"比利时","v":"18"},{"n":"爱尔兰","v":"19"},{"n":"西班牙","v":"20"},{"n":"澳大利亚","v":"21"},{"n":"波兰","v":"22"},{"n":"土耳其","v":"23"},{"n":"越南","v":"24"}]}]
},
searchUrl: '/p/0/127.0.0.1:10072/https://www.cilixiong.com/e/search/index.php#classid=1,2&show=title&tempid=1&keyboard=**;post',
searchable:0,
quickSearch:0,
filterable:1,
headers:{
'User-Agent': 'MOBILE_UA'
},
timeout:5000,
class_name:'电影&剧集&豆瓣电影Top250&IMDB Top250&高分悬疑片&高分喜剧片&高分传记片&高分爱情片&高分犯罪片&高分恐怖片&高分冒险片&高分武侠片&高分奇幻片&高分历史片&高分战争片&高分歌舞片&高分灾难片&高分情色片&高分西部片&高分音乐片&高分科幻片&高分动作片&高分动画片&高分纪录片&冷门佳片',
class_url:'1&2&/top250/&/s/imdbtop250/&/s/suspense/&/s/comedy/&/s/biopic/&/s/romance/&/s/crime/&/s/horror/&/s/adventure/&/s/martial/&/s/fantasy/&/s/history/&/s/war/&/s/musical/&/s/disaster/&/s/erotic/&/s/west/&/s/music/&/s/sci-fi/&/s/action/&/s/animation/&/s/documentary/&/s/unpopular/',
play_parse:false,
lazy:'',
limit:6,
推荐: `js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
var d = [];
var html = request(input);
var list = pdfa(html, 'body&&.col');
list.forEach(it => {
d.push({
title: pdfh(it, 'h2&&Text'),
desc: pdfh(it, '.me-auto&&Text') + '分 / ' + pdfh(it, '.small&&Text'),
pic_url: pd(it, '.card-img&&style')
});
})
setResult(d);
`,
一级: `js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
var d = [];
if (MY_CATE !== '1' && MY_CATE !== '2') {
let turl = (MY_PAGE === 1)? 'index' : 'index_'+ MY_PAGE;
input = rule.homeUrl + MY_CATE + turl + '.html';
}
var html = request(input);
var list = pdfa(html, 'body&&.col');
list.forEach(it => {
d.push({
title: pdfh(it, 'h2&&Text'),
desc: pdfh(it, '.me-auto&&Text') + '分 / ' + pdfh(it, '.small&&Text'),
pic_url: pd(it, '.card-img&&style')
});
})
setResult(d);
`,
二级:'',
搜索:'',
}

js/cj.json (Executable file, 593 lines)
{
"ss": 1,
"api_site": [
{
"name": "TV-电影天堂资源",
"api": "http://caiji.dyttzyapi.com/api.php/provide/vod",
"detail": "http://caiji.dyttzyapi.com",
"bz": "0",
"paichu": "1,2,3,4"
},
{
"name": "TV-量子资源",
"api": "https://cj.lziapi.com/api.php/provide/vod",
"detail": "",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-1080资源",
"api": "https://api.1080zyku.com/inc/api_mac10.php",
"detail": "https://api.1080zyku.com",
"bz": "0",
"paichu": "1,2,3,4"
},
{
"name": "AV-155资源",
"api": "https://155api.com/api.php/provide/vod",
"detail": "https://155api.com",
"bz": "0",
"paichu": ""
},
{
"name": "TV-360资源",
"api": "https://360zy.com/api.php/provide/vod",
"detail": "https://360zy.com",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-天涯资源",
"api": "https://tyyszy.com/api.php/provide/vod",
"detail": "https://tyyszy.com",
"bz": "1",
"paichu": "20,39,45,50"
},
{
"name": "TV-暴风资源",
"api": "https://bfzyapi.com/api.php/provide/vod",
"detail": "",
"bz": "1",
"paichu": ""
},
{
"name": "TV-索尼-闪电资源",
"api": "https://xsd.sdzyapi.com/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": "1,2,3,4"
},
{
"name": "TV-索尼资源",
"api": "https://suoniapi.com/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": "1,2,3,4"
},
{
"name": "TV-红牛资源",
"api": "https://www.hongniuzy2.com/api.php/provide/vod",
"detail": "https://www.hongniuzy2.com",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-茅台资源",
"api": "https://caiji.maotaizy.cc/api.php/provide/vod",
"detail": "https://caiji.maotaizy.cc",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-虎牙资源",
"api": "https://www.huyaapi.com/api.php/provide/vod",
"detail": "https://www.huyaapi.com",
"bz": "0",
"paichu": "1,2,17"
},
{
"name": "TV-豆瓣资源",
"api": "https://caiji.dbzy.tv/api.php/provide/vod",
"detail": "https://caiji.dbzy.tv",
"bz": "1",
"paichu": "1,2,3,4,42,51,52"
},
{
"name": "TV-豆瓣资源2",
"api": "https://dbzy.tv/api.php/provide/vod",
"detail": "https://dbzy.tv",
"bz": "1",
"paichu": "1,2,3,4,42,51,52"
},
{
"name": "TV-豆瓣资源3",
"api": "https://caiji.dbzy5.com/api.php/provide/vod/from/dbm3u8/at/josn",
"detail": "https://dbzy.tv",
"bz": "1",
"paichu": "1,2,3,4,42,51,52"
},
{
"name": "TV-豪华资源",
"api": "https://hhzyapi.com/api.php/provide/vod",
"detail": "https://hhzyapi.com",
"bz": "1",
"paichu": "1,2,17,27"
},
{
"name": "TV-CK资源",
"api": "https://ckzy.me/api.php/provide/vod",
"detail": "https://ckzy.me",
"bz": "1",
"paichu": "21,39"
},
{
"name": "TV-U酷资源",
"api": "https://api.ukuapi.com/api.php/provide/vod",
"detail": "https://api.ukuapi.com",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-U酷资源2",
"api": "https://api.ukuapi88.com/api.php/provide/vod",
"detail": "https://api.ukuapi88.com",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-ikun资源",
"api": "https://ikunzyapi.com/api.php/provide/vod",
"detail": "https://ikunzyapi.com",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-wujinapi无尽",
"api": "https://api.wujinapi.cc/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": "1,2,3,4,5"
},
{
"name": "TV-丫丫点播",
"api": "https://cj.yayazy.net/api.php/provide/vod",
"detail": "https://cj.yayazy.net",
"bz": "0",
"paichu": "1,2,3,4"
},
{
"name": "TV-光速资源",
"api": "https://api.guangsuapi.com/api.php/provide/vod",
"detail": "https://api.guangsuapi.com",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-卧龙点播",
"api": "https://collect.wolongzyw.com/api.php/provide/vod",
"detail": "https://collect.wolongzyw.com",
"bz": "1",
"paichu": ""
},
{
"name": "TV-卧龙资源",
"api": "https://collect.wolongzy.cc/api.php/provide/vod",
"detail": "",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-卧龙资源2",
"api": "https://wolongzyw.com/api.php/provide/vod",
"detail": "https://wolongzyw.com",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-新浪点播",
"api": "https://api.xinlangapi.com/xinlangapi.php/provide/vod",
"detail": "https://api.xinlangapi.com",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-无尽资源",
"api": "https://api.wujinapi.com/api.php/provide/vod",
"detail": "",
"bz": "1",
"paichu": "1,2,3,4,5"
},
{
"name": "TV-无尽资源2",
"api": "https://api.wujinapi.me/api.php/provide/vod",
"detail": "",
"bz": "1",
"paichu": "1,2,3,4,5"
},
{
"name": "TV-无尽资源3",
"api": "https://api.wujinapi.net/api.php/provide/vod",
"detail": "",
"bz": "1",
"paichu": "1,2,3,4,5"
},
{
"name": "TV-旺旺短剧",
"api": "https://wwzy.tv/api.php/provide/vod",
"detail": "https://wwzy.tv",
"bz": "1",
"paichu": "2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18"
},
{
"name": "TV-旺旺资源",
"api": "https://api.wwzy.tv/api.php/provide/vod",
"detail": "https://api.wwzy.tv",
"bz": "1",
"paichu": "2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18"
},
{
"name": "TV-最大点播",
"api": "http://zuidazy.me/api.php/provide/vod",
"detail": "http://zuidazy.me",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-最大资源",
"api": "https://api.zuidapi.com/api.php/provide/vod",
"detail": "https://api.zuidapi.com",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-樱花资源",
"api": "https://m3u8.apiyhzy.com/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": "1,2,3,4,5"
},
{
"name": "TV-步步高资源",
"api": "https://api.yparse.com/api/json",
"detail": "",
"bz": "0",
"paichu": ""
},
{
"name": "TV-牛牛点播",
"api": "https://api.niuniuzy.me/api.php/provide/vod",
"detail": "https://api.niuniuzy.me",
"bz": "0",
"paichu": "1,2,3,4"
},
{
"name": "AV-gay资源",
"api": "https://gayapi.com/api.php/provide/vod/at/json",
"detail": "https://api.bwzyz.com",
"bz": "0",
"paichu": ""
},
{
"name": "TV-百度云资源",
"api": "https://api.apibdzy.com/api.php/provide/vod",
"detail": "https://api.apibdzy.com",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-神马云",
"api": "https://api.1080zyku.com/inc/apijson.php/",
"detail": "https://api.1080zyku.com",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-速博资源",
"api": "https://subocaiji.com/api.php/provide/vod",
"detail": "",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-金鹰点播",
"api": "https://jinyingzy.com/api.php/provide/vod",
"detail": "https://jinyingzy.com",
"bz": "1",
"paichu": "1,2,17,27"
},
{
"name": "TV-金鹰资源",
"api": "https://jyzyapi.com/api.php/provide/vod",
"detail": "https://jyzyapi.com",
"bz": "1",
"paichu": "1,2,17,27"
},
{
"name": "TV-閃電资源",
"api": "https://sdzyapi.com/api.php/provide/vod",
"detail": "https://sdzyapi.com",
"bz": "0",
"paichu": "1,2,3,4"
},
{
"name": "TV-非凡资源",
"api": "https://cj.ffzyapi.com/api.php/provide/vod",
"detail": "https://cj.ffzyapi.com",
"bz": "0",
"paichu": "1,2,3,4"
},
{
"name": "TV-飘零资源",
"api": "https://p2100.net/api.php/provide/vod",
"detail": "https://p2100.net",
"bz": "1",
"paichu": "1,2,3,4"
},
{
"name": "TV-魔爪资源",
"api": "https://mozhuazy.com/api.php/provide/vod",
"detail": "https://mozhuazy.com",
"bz": "1",
"paichu": "1,25,34,40"
},
{
"name": "TV-魔都动漫",
"api": "https://caiji.moduapi.cc/api.php/provide/vod",
"detail": "https://caiji.moduapi.cc",
"bz": "1",
"paichu": ""
},
{
"name": "TV-魔都资源",
"api": "https://www.mdzyapi.com/api.php/provide/vod",
"detail": "https://www.mdzyapi.com",
"bz": "1",
"paichu": ""
},
{
"name": "AV-91麻豆",
"api": "https://91md.me/api.php/provide/vod",
"detail": "https://91md.me",
"bz": "0",
"paichu": ""
},
{
"name": "AV-AIvin",
"api": "http://lbapiby.com/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": ""
},
{
"name": "AV-JKUN资源",
"api": "https://jkunzyapi.com/api.php/provide/vod",
"detail": "https://jkunzyapi.com",
"bz": "0",
"paichu": ""
},
{
"name": "AV-souav资源",
"api": "https://api.souavzy.vip/api.php/provide/vod",
"detail": "https://api.souavzy.vip",
"bz": "0",
"paichu": ""
},
{
"name": "AV-乐播资源",
"api": "https://lbapi9.com/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": ""
},
{
"name": "AV-奥斯卡资源",
"api": "https://aosikazy.com/api.php/provide/vod",
"detail": "https://aosikazy.com",
"bz": "0",
"paichu": ""
},
{
"name": "AV-奶香香",
"api": "https://Naixxzy.com/api.php/provide/vod",
"detail": "https://Naixxzy.com",
"bz": "0",
"paichu": ""
},
{
"name": "AV-森林资源",
"api": "https://slapibf.com/api.php/provide/vod",
"detail": "https://slapibf.com",
"bz": "0",
"paichu": ""
},
{
"name": "AV-淫水机资源",
"api": "https://www.xrbsp.com/api/json.php",
"detail": "https://www.xrbsp.com",
"bz": "0",
"paichu": ""
},
{
"name": "AV-玉兔资源",
"api": "https://apiyutu.com/api.php/provide/vod",
"detail": "https://apiyutu.com",
"bz": "0",
"paichu": ""
},
{
"name": "AV-番号资源",
"api": "http://fhapi9.com/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": ""
},
{
"name": "AV-白嫖资源",
"api": "https://www.kxgav.com/api/json.php",
"detail": "https://www.kxgav.com",
"bz": "0",
"paichu": ""
},
{
"name": "AV-精品资源",
"api": "https://www.jingpinx.com/api.php/provide/vod",
"detail": "https://www.jingpinx.com",
"bz": "0",
"paichu": ""
},
{
"name": "AV-美少女资源",
"api": "https://www.msnii.com/api/json.php",
"detail": "https://www.msnii.com",
"bz": "0",
"paichu": ""
},
{
"name": "AV-老色逼资源",
"api": "https://apilsbzy1.com/api.php/provide/vod",
"detail": "https://apilsbzy1.com",
"bz": "0",
"paichu": ""
},
{
"name": "AV-色南国",
"api": "https://api.sexnguon.com/api.php/provide/vod",
"detail": "https://api.sexnguon.com",
"bz": "0",
"paichu": ""
},
{
"name": "AV-色猫资源",
"api": "https://api.maozyapi.com/inc/apijson_vod.php",
"detail": "https://api.maozyapi.com",
"bz": "0",
"paichu": ""
},
{
"name": "AV-辣椒资源",
"api": "https://apilj.com/api.php/provide/vod",
"detail": "https://apilj.com",
"bz": "0",
"paichu": ""
},
{
"name": "AV-香奶儿资源",
"api": "https://www.gdlsp.com/api/json.php",
"detail": "https://www.gdlsp.com",
"bz": "0",
"paichu": ""
},
{
"name": "AV-鲨鱼资源",
"api": "https://shayuapi.com/api.php/provide/vod",
"detail": "https://shayuapi.com",
"bz": "0",
"paichu": ""
},
{
"name": "AV-黄AV资源",
"api": "https://www.pgxdy.com/api/json.php",
"detail": "https://www.pgxdy.com",
"bz": "0",
"paichu": ""
},
{
"name": "TV-极速资源",
"api": "https://jszyapi.com/api.php/provide/vod",
"detail": "https://jszyapi.com",
"bz": "0",
"paichu": "1,2,17,27"
},
{
"name": "TV-魔爪资源",
"api": "https://mozhuazy.com/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": "1,25,34,40"
},
{
"name": "TV-魔都资源",
"api": "https://www.mdzyapi.com/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": ""
},
{
"name": "杏吧资源",
"api": "https://xingba111.com/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": ""
},
{
"name": "TV-量子资源",
"api": "https://cj.lziapi.com/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": "1,2,3,4"
},
{
"name": "森林资源",
"api": "https://slapibf.com/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": ""
},
{
"name": "TV-红牛资源",
"api": "https://www.hongniuzy3.com/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": "1,2"
},
{
"name": "TV-鸭鸭资源",
"api": "https://cj.yayazy.net/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": "1,2,3,4"
},
{
"name": "TV-海洋资源",
"api": "http://www.seacms.org/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": ""
},
{
"name": "黄色资源啊啊",
"api": "https://hsckzy888.com/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": ""
},
{
"name": "小鸡资源",
"api": "https://api.xiaojizy.live/provide/vod",
"detail": "",
"bz": "0",
"paichu": ""
},
{
"name": "TV-新浪资源阿",
"api": "https://api.xinlangapi.com/xinlangapi.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": "1,2"
},
{
"name": "辣椒资源黄黄",
"api": "https://apilj.com/api.php/provide",
"detail": "",
"bz": "0",
"paichu": ""
},
{
"name": "细胞采集黄色",
"api": "https://www.xxibaozyw.com/api.php/provide/vod",
"detail": "",
"bz": "0",
"paichu": ""
}
]
}
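A downstream player consumes each `api_site` entry by appending MacCMS-style query parameters to `api` and dropping categories listed in `paichu` (排除, "exclude"). A minimal consumption sketch, assuming `paichu` holds comma-separated `type_id` values and `ac=videolist&pg=N` paging (both inferred from the config, not confirmed by it); `buildListUrl` and `filterCategories` are hypothetical helper names:

```javascript
// Sketch (assumptions): provide/vod endpoints take ac=videolist&pg=N,
// and `paichu` lists type_id values to exclude from the category menu.
function buildListUrl(site, page) {
  const sep = site.api.includes("?") ? "&" : "?";
  return site.api + sep + "ac=videolist&pg=" + page;
}

function filterCategories(classList, site) {
  // "1,2,3,4" -> Set {1, 2, 3, 4}; empty paichu excludes nothing
  const excluded = new Set(
    (site.paichu || "").split(",").filter(Boolean).map(Number)
  );
  return classList.filter(function (c) { return !excluded.has(c.type_id); });
}

const site = {
  name: "TV-电影天堂资源",
  api: "http://caiji.dyttzyapi.com/api.php/provide/vod",
  paichu: "1,2,3,4"
};
```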

js/ddys.js (Executable file, 174 lines)
var lists = `js:
log(TABS);
let d = [];
pdfh = jsp.pdfh;
pdfa = jsp.pdfa;
pd = jsp.pd;
if (typeof play_url === "undefined") {
var play_url = ""
}
function getLists(html)
{
let src = pdfh(html, ".wp-playlist-script&&Html");
src = JSON.parse(src).tracks;
let list1 = [];
let list2 = [];
let url1 = "";
let url2 = "";
src.forEach(function(it) {
let src0 = it.src0;
let src1 = it.src1;
let title = it.caption;
url1 = "https://v.ddys.pro" + src0;
url2 = "https://ddys.pro/getvddr2/video?id=" + src1 + "&type=mix";
let zm = "https://ddys.pro/subddr/" + it.subsrc;
list1.push({
title: title,
url: url1,
desc: zm
});
list2.push({
title: title,
url: url2,
desc: zm
})
});
return {
list1: list1,
list2: list2
}
}
var data = getLists(html);
var list1 = data.list1;
var list2 = data.list2;
let nums = pdfa(html, "body&&.post-page-numbers");
nums.forEach(function(it) {
let num = pdfh(it, "body&&Text");
log(num);
let nurl = input + num + "/";
if (num == 1) {
return
}
log(nurl);
let html = request(nurl);
let data = getLists(html);
list1 = list1.concat(data.list1);
list2 = list2.concat(data.list2)
});
list1 = list1.map(function(item) {
return item.title + "$" + play_url + urlencode(item.url + "|" + input + "|" + item.desc)
});
list2 = list2.map(function(item) {
return item.title + "$" + play_url + urlencode(item.url + "|" + input + "|" + item.desc)
});
LISTS=[];
let dd = pdfa(html, 'div.wp-playlist~a');
dd.forEach(function(it){
let burl = pd(it, 'a&&href', HOST);
if (/(pan.quark.cn|www.aliyundrive.com|www.alipan.com)/.test(burl)){
let type="ali";
if (burl.includes("www.aliyundrive.com") || burl.includes("www.alipan.com")){
type = "ali";
}else if (burl.includes("pan.quark.cn")){
type = "quark";
}
LISTS.push([burl+ '$' + play_url + urlencode('http://127.0.0.1:9978/proxy?do='+type+'&type=push&url='+encodeURIComponent(burl)) + '||']);
}
});
LISTS = LISTS.concat([list1, list2]);
`;
var lazy = `js:
let purl = input.split("|")[0];
let referer = input.split("|")[1];
let zm = input.split("|")[2];
print("purl:" + purl);
print("referer:" + referer);
print("zm:" + zm);
if (/getvddr/.test(purl)) {
let html = request(purl, {
headers: {
Referer: HOST,
"User-Agent": MOBILE_UA
}
});
print(html);
try {
input = {jx: 0, url: JSON.parse(html).url, parse: 0}
} catch (e) {
input = purl
}
} else {
input = {
jx: 0,
url: purl,
parse: 0,
header: JSON.stringify({
'user-agent': MOBILE_UA,
'referer': HOST
})
}
}
`;
// Site directory page: https://ddys.site
// Site directory page: https://ddys.wiki
var rule={
title:'ddys',
// host:'https://ddys.wiki',
// hostJs:'print(HOST);let html=request(HOST,{headers:{"User-Agent":MOBILE_UA}});HOST = jsp.pdfh(html,"a:eq(1)&&href")',
host:'https://ddys.pro',
// host:'https://ddys.mov',
url:'/fyclass/page/fypage/',
searchUrl:'/?s=**&post_type=post',
searchable:2,
quickSearch:0,
filterable:0,
headers:{
'User-Agent':'MOBILE_UA',
},
class_parse:'#primary-menu li.menu-item;a&&Text;a&&href;\\.pro/(.*)',
cate_exclude:'站长|^其他$|关于|^电影$|^剧集$|^类型$',
play_parse:true,
// lazy:'js:let purl=input.split("|")[0];let referer=input.split("|")[1];let zm=input.split("|")[2];print("purl:"+purl);print("referer:"+referer);print("zm:"+zm);let myua="okhttp/3.15";if(/ddrkey/.test(purl)){let ret=request(purl,{Referer:referer,withHeaders:true,"User-Agent":myua});log(ret);input=purl}else{let html=request(purl,{headers:{Referer:referer,"User-Agent":myua}});print(html);try{input=JSON.parse(html).url||{}}catch(e){input=purl}}',
lazy:lazy,
limit:6,
推荐:'*',
double:true, // whether recommended items use two-level (double) positioning
一级:'.post-box-list&&article;a:eq(-1)&&Text;.post-box-image&&style;a:eq(0)&&Text;a:eq(-1)&&href',
二级:{
"title":".post-title&&Text;.cat-links&&Text",
"img":".doulist-item&&img&&data-cfsrc",
"desc":".published&&Text",
"content":".abstract&&Text",
"tabs":`js:
TABS=[];
let d = pdfa(html, 'div.wp-playlist~a');
let tabsq=[];
d.forEach(function(it){
let burl = pd(it, 'a&&href', HOST);
if (burl.includes("pan.quark.cn")){
tabsq.push("夸克網盤");
}else if (burl.includes("www.aliyundrive.com") || burl.includes("www.alipan.com")){
tabsq.push("阿里雲盤");
}
});
if (tabsq.length == 1){
TABS=TABS.concat(tabsq);
}else{
let tmpIndex=1;
tabsq.forEach(function(it){
TABS.push(it+tmpIndex);
tmpIndex++;
});
}
TABS=TABS.concat(['国内(改Exo播放器)','国内2']);
`,
"lists":lists
},
搜索:'#main&&article;.post-title&&Text;;.published&&Text;a&&href'
}
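The `lazy` block above receives a compound `input` string: play url, referer page, and subtitle url joined by `|` (the `lists` builder assembles it the same way with `urlencode(item.url + "|" + input + "|" + item.desc)`). A tiny sketch of that layout, with `splitPlayInput` as a hypothetical helper name:

```javascript
// The pipe-joined layout the lazy block above splits into purl/referer/zm.
function splitPlayInput(input) {
  const [purl, referer, zm] = input.split("|");
  return { purl: purl, referer: referer, zm: zm };
}

const parts = splitPlayInput(
  "https://v.ddys.pro/v/ep1.mp4|https://ddys.pro/post/|https://ddys.pro/subddr/ep1.vtt"
);
```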

js/dydhhy.js (Executable file, 142 lines)
var rule = {
title: 'dydhhy',
host: 'http://www.dydhhy.com',
homeUrl: '/',
url: '/tag/fyclass/page/fypage?',
filter_url: '{{fl.class}}',
filter: {},
searchUrl: '/?s=**',
searchable: 2,
quickSearch: 1,
filterable: 0,
headers: {
'User-Agent': 'MOBILE_UA',
'Cookie': ''
},
timeout: 5000,
class_name: '电视剧&电影&美剧&韩剧&日剧&英剧&2023&2022&2021',
class_url: 'tv&movie&美剧&韩剧&日剧&英剧&2023&2022&2021',
play_parse: true,
play_json: [{
re: '*',
json: {
parse: 0,
jx: 0
}
}],
lazy: '',
limit: 6,
推荐: 'div.clear:gt(1):has(img);.entry-title&&Text;img&&src;;a&&href',
一级: 'div.clear:gt(1):has(img);.entry-title&&Text;img&&src;;a&&href',
二级: {
title: ".single-excerpt&&Text",
img: "img&&src",
desc: ".entry-date&&Text",
content: "p&&Text",
tabs: `js: pdfh = jsp.pdfh;
pdfa = jsp.pdfa;
pd = jsp.pd;
TABS=[]
let d = pdfa(html, 'fieldset p a');
let tabsa = [];
let tabsq = [];
let tabsm = false;
let tabse = false;
d.forEach(function(it) {
let burl = pdfh(it, 'a&&href');
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
tabsa.push("阿里雲盤");
}else if (burl.startsWith("https://pan.quark.cn/s/")){
tabsq.push("夸克網盤");
}else if (burl.startsWith("magnet")){
tabsm = true;
}else if (burl.startsWith("ed2k")){
tabse = true;
}
});
if (tabsm === true){
TABS.push("磁力");
}
if (tabse === true){
TABS.push("電驢");
}
if (false && tabsa.length + tabsq.length > 1){
TABS.push("選擇右側綫路");
}
let tmpIndex;
tmpIndex=1;
tabsa.forEach(function(it){
TABS.push(it + tmpIndex);
tmpIndex = tmpIndex + 1;
});
tmpIndex=1;
tabsq.forEach(function(it){
TABS.push(it + tmpIndex);
tmpIndex = tmpIndex + 1;
});
log('dydhhy TABS >>>>>>>>>>>>>>>>>>' + TABS);`,
lists: `js: log(TABS);
pdfh = jsp.pdfh;
pdfa = jsp.pdfa;
pd = jsp.pd;
LISTS = [];
let d = pdfa(html, 'fieldset p a');
let lista = [];
let listq = [];
let listm = [];
let liste = [];
d.forEach(function(it){
let burl = pdfh(it, 'a&&href');
let title = pdfh(it, 'a&&Text');
log('dydhhy title >>>>>>>>>>>>>>>>>>>>>>>>>>' + title);
log('dydhhy burl >>>>>>>>>>>>>>>>>>>>>>>>>>' + burl);
let loopresult = title + '$' + burl;
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = "push://" + burl;
}
loopresult = title + '$' + burl;
lista.push(loopresult);
}else if (burl.startsWith("https://pan.quark.cn/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = "push://" + burl;
}
loopresult = title + '$' + burl;
listq.push(loopresult);
}else if (burl.startsWith("magnet")){
listm.push(loopresult);
}else if (burl.startsWith("ed2k")){
liste.push(loopresult);
}
});
if (listm.length>0){
LISTS.push(listm);
}
if (liste.length>0){
LISTS.push(liste);
}
if (false && lista.length + listq.length > 1){
LISTS.push(["選擇右側綫路或3秒後自動跳過$http://127.0.0.1:10079/delay/"]);
}
lista.forEach(function(it){
LISTS.push([it]);
});
listq.forEach(function(it){
LISTS.push([it]);
});`,
}, 搜索: 'div.clear:gt(0):has(img);img&&alt;img&&data-src;;a&&href',
}

js/dygang.js (Executable file, 212 lines)
var rule = {
title:'电影港[磁]',
编码:'gb2312',
搜索编码:'gb2312',
host:'https://www.dygang.tv',
homeUrl:'/',
url: '/fyclass/index_fypage.htm?',
filter_url:'{{fl.class}}',
filter:{
},
searchUrl: '/e/search/index123.php#tempid=1&tbname=article&keyboard=**&show=title%2Csmalltext&Submit=%CB%D1%CB%F7;post',
searchable:2,
quickSearch:0,
filterable:0,
headers:{
'User-Agent': 'MOBILE_UA',
'Referer': 'https://www.dygang.tv/'
},
timeout:5000,
class_name:'最新电影&经典高清&国配电影&经典港片&国剧&日韩剧&美剧&综艺&动漫&纪录片&高清原盘&4K高清区&3D电影&电影专题',
class_url:'ys&bd&gy&gp&dsj&dsj1&yx&zy&dmq&jilupian&1080p&4K&3d&dyzt',
play_parse:true,
play_json:[{
re:'*',
json:{
parse:0,
jx:0
}
}],
lazy:'',
limit:6,
推荐:'div#tl tr:has(>td>table.border1>tbody>tr>td>a>img);table.border1 img&&alt;table.border1 img&&src;table:eq(2)&&Text;a&&href',
一级:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let d = [];
let turl = (MY_PAGE === 1)? '/' : '/index_'+ MY_PAGE + '.htm';
input = rule.homeUrl + MY_CATE + turl;
let html = request(input);
let list = pdfa(html, 'tr:has(>td>table.border1)');
list.forEach(it => {
let title = pdfh(it, 'table.border1 img&&alt');
if (title!==""){
d.push({
title: title,
desc: pdfh(it, 'table:eq(1)&&Text'),
pic_url: pd(it, 'table.border1 img&&src', HOST),
url: pdfh(it, 'a&&href')
});
}
})
setResult(d);
`,
二级:{
title:"div.title a&&Text",
img:"#dede_content img&&src",
desc:"#dede_content&&Text",
content:"#dede_content&&Text",
tabs:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
TABS=[]
let d = pdfa(html, '#dede_content table tbody tr');
let tabsa = [];
let tabsq = [];
let tabsm = false;
let tabse = false;
let tabm3u8 = [];
d.forEach(function(it) {
let burl = pd(it, 'a&&href',HOST);
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
tabsa.push("阿里雲盤");
}else if (burl.startsWith("https://pan.quark.cn/s/")){
tabsq.push("夸克網盤");
}else if (burl.startsWith("magnet")){
tabsm = true;
}else if (burl.startsWith("ed2k")){
tabse = true;
}
});
if (false){
d = pdfa(html, 'div:has(>div#post_content) div.widget:has(>h3)');
d.forEach(function(it) {
tabm3u8.push(pdfh(it, 'h3&&Text'));
});
}
if (tabsm === true){
TABS.push("磁力");
}
if (tabse === true){
TABS.push("電驢");
}
if (false && tabsa.length + tabsq.length > 1){
TABS.push("選擇右側綫路");
}
let tmpIndex;
tmpIndex=1;
tabsa.forEach(function(it){
TABS.push(it + tmpIndex);
tmpIndex = tmpIndex + 1;
});
tmpIndex=1;
tabsq.forEach(function(it){
TABS.push(it + tmpIndex);
tmpIndex = tmpIndex + 1;
});
tabm3u8.forEach(function(it){
TABS.push(it);
});
log('dygang TABS >>>>>>>>>>>>>>>>>>' + TABS);
`,
lists:`js:
log(TABS);
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
LISTS = [];
let d = pdfa(html, '#dede_content table tbody tr');
let lista = [];
let listq = [];
let listm = [];
let liste = [];
let listm3u8 = {};
d.forEach(function(it){
let burl = pd(it, 'a&&href',HOST);
let title = pdfh(it, 'a&&Text');
log('dygang title >>>>>>>>>>>>>>>>>>>>>>>>>>' + title);
log('dygang burl >>>>>>>>>>>>>>>>>>>>>>>>>>' + burl);
let loopresult = title + '$' + burl;
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = 'push://' + burl;
}
loopresult = title + '$' + burl;
lista.push(loopresult);
}else if (burl.startsWith("https://pan.quark.cn/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = 'push://' + burl;
}
loopresult = title + '$' + burl;
listq.push(loopresult);
}else if (burl.startsWith("magnet")){
listm.push(loopresult);
}else if (burl.startsWith("ed2k")){
liste.push(loopresult);
}
});
if (listm.length>0){
LISTS.push(listm);
}
if (liste.length>0){
LISTS.push(liste);
}
lista.forEach(function(it){
LISTS.push([it]);
});
listq.forEach(function(it){
LISTS.push([it]);
});
for ( const key in listm3u8 ){
if (listm3u8.hasOwnProperty(key)){
LISTS.push(listm3u8[key]);
}
};
`,
},
搜索:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let params = 'tempid=1&tbname=article&keyboard=' + KEY + '&show=title%2Csmalltext&Submit=%CB%D1%CB%F7';
let _fetch_params = JSON.parse(JSON.stringify(rule_fetch_params));
let postData = {
method: "POST",
body: params
};
delete(_fetch_params.headers['Content-Type']);
Object.assign(_fetch_params, postData);
log("dygang search postData>>>>>>>>>>>>>>>" + JSON.stringify(_fetch_params));
let search_html = request( HOST + '/e/search/index123.php', _fetch_params, true);
//log("dygang search result>>>>>>>>>>>>>>>" + search_html);
let d=[];
let dlist = pdfa(search_html, 'table.border1');
dlist.forEach(function(it){
let title = pdfh(it, 'img&&alt');
if (searchObj.quick === true){
if (false && title.includes(KEY)){
title = KEY;
}
}
let img = pd(it, 'img&&src', HOST);
let content = pdfh(it, 'img&&alt');
let desc = pdfh(it, 'img&&alt');
let url = pd(it, 'a&&href', HOST);
d.push({
title:title,
img:img,
content:content,
desc:desc,
url:url
})
});
setResult(d);
`,
}
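The `lists` block above wraps every cloud-drive share link for the local proxy pusher on port 9978, adding `confirm=0` only when there is a single tab. A sketch of that URL construction; `buildProxyUrl` is a hypothetical helper name, and `kind` is the `do=` value ("ali" or "quark") used above:

```javascript
// Wrap a share link for the local proxy pusher, mirroring the branches above:
// confirm=0 is appended only in the single-tab case.
function buildProxyUrl(kind, shareUrl, singleTab) {
  const confirm = singleTab ? "&confirm=0" : "";
  return (
    "http://127.0.0.1:9978/proxy?do=" + kind +
    "&type=push" + confirm + "&url=" + encodeURIComponent(shareUrl)
  );
}
```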

js/funletu.js (Executable file, 86 lines)
var rule = {
title:'趣盘搜[夸]',
host:'https://v.funletu.com',
homeUrl:'/',
url: '/forum-fyclass-fypage.html?',
filter_url:'{{fl.class}}',
filter:{
},
searchUrl: 'json:/search#{"style":"get","datasrc":"search","query":{"id":"","datetime":"","commonid":1,"parmid":"","fileid":"","reportid":"","validid":"","searchtext":"**"},"page":{"pageSize":10,"pageIndex":1},"order":{"prop":"id","order":"desc"},"message":"请求资源列表数据"};postjson',
searchable:2,
quickSearch:0,
filterable:0,
headers:{
'User-Agent': PC_UA,
'Accept': '*/*',
'Referer': 'https://pan.funletu.com/'
},
timeout:5000,
class_name:'',
class_url:'',
play_parse:true,
play_json:[{
re:'*',
json:{
parse:0,
jx:0
}
}],
lazy:'',
limit:6,
推荐:'',
一级:'',
二级:`js:
VOD.vod_play_from = "夸克網盤";
VOD.vod_remarks = detailUrl;
VOD.vod_actor = "沒有二級,只有一級鏈接直接推送播放";
VOD.vod_content = MY_URL;
VOD.vod_play_url = "夸克網盤$" + detailUrl;
`,
搜索:`js:
let postJson = {
style:"get",
datasrc:"search",
query:{
id:"",
datetime:"",
commonid:1,
parmid:"",
fileid:"",
reportid:"",
validid:"",
searchtext: KEY
},
page:{ pageSize:20, pageIndex: MY_PAGE },
order:{prop:"id",order:"desc"},
message:"请求资源列表数据"
};
let postData = {
method: "POST",
body: postJson
};
log("funletu search postData1>>>>>>>>>>>>>>>" + JSON.stringify(postData));
let _fetch_params = JSON.parse(JSON.stringify(rule_fetch_params));
Object.assign(_fetch_params, postData);
log("funletu search postData>>>>>>>>>>>>>>>" + JSON.stringify(_fetch_params));
let new_html=post(rule.homeUrl + 'search', _fetch_params);
//log("funletu search result>>>>>>>>>>>>>>>" + new_html);
let json=JSON.parse(new_html);
let d=[]
for(const it in json["data"]){
if (json.data.hasOwnProperty(it)){
log("funletu search it>>>>>>>>>>>>>>>" + JSON.stringify(json.data[it]));
if (json.data[it].valid === 0){
d.push({
title:json.data[it].title,
img:'',
content:json.data[it].updatetime,
desc:json.data[it].updatetime,
url:'push://'+json.data[it].url.split("?")[0]
});
}
}
}
setResult(d);
`,
}
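The 搜索 block above keeps only hits with `valid === 0` and turns each one into a `push://` entry after stripping the query string from the share link. A sketch of that per-hit mapping; `toPushItem` is a hypothetical helper name, and the field names mirror the response handling above:

```javascript
// Map one search hit to a push:// entry, as the loop above does.
function toPushItem(hit) {
  if (hit.valid !== 0) return null; // only valid === 0 entries are kept above
  return {
    title: hit.title,
    desc: hit.updatetime,
    url: "push://" + hit.url.split("?")[0] // strip query string before pushing
  };
}

const item = toPushItem({
  title: "example",
  updatetime: "2024-01-01",
  valid: 0,
  url: "https://pan.quark.cn/s/abc123?entry=share"
});
```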

js/huya.js (Normal file, 48 lines; diff suppressed because one or more lines are too long)

js/index.config.js (Executable file, 499 lines)
var __defProp = Object.defineProperty;
var __getOwnPropDesc = Object.getOwnPropertyDescriptor;
var __getOwnPropNames = Object.getOwnPropertyNames;
var __hasOwnProp = Object.prototype.hasOwnProperty;
var __export = (target, all) => {
for (var name in all)
__defProp(target, name, { get: all[name], enumerable: true });
};
var __copyProps = (to, from, except, desc) => {
if (from && typeof from === "object" || typeof from === "function") {
for (let key of __getOwnPropNames(from))
if (!__hasOwnProp.call(to, key) && key !== except)
__defProp(to, key, { get: () => from[key], enumerable: !(desc = __getOwnPropDesc(from, key)) || desc.enumerable });
}
return to;
};
var __toCommonJS = (mod) => __copyProps(__defProp({}, "__esModule", { value: true }), mod);
// src/index.config.js
var index_config_exports = {};
__export(index_config_exports, {
default: () => index_config_default
});
module.exports = __toCommonJS(index_config_exports);
var index_config_default = {
kunyu77: {
testcfg: {
bbbb: "aaaaa"
}
},
commonConfig: {
panOrder: 'uc|p123|quark|ali|ty|115',
},
ali: {
thread: "4",
chunkSize: "400",
token: ""
},
quark: {
thread: "6",
chunkSize: "256",
// actually 256 KB
cookie: ""
},
uc: {
cookie: "",
token: "",
ut:""
},
y115: {
cookie: ""
},
tyi: {
username: "",
password: ""
},
p123: {
username: "",
password: ""
},
xiaoya: {
url: "https://tvbox.omii.top/vod1/DixHtoGB"
},
yiso: {
url: "https://yiso.fun",
cookie: ""
},
bili: {
categories: "经典无损音乐合集#帕梅拉#太极拳#健身#舞蹈#音乐#歌曲#MV4K#演唱会4K#白噪音4K#知名UP主#说案#解说#演讲#时事#探索发现超清#纪录片超清#沙雕动画#沙雕穿越#沙雕#平面设计教学#软件教程#实用教程#旅游#风景4K#食谱#美食超清#搞笑#球星#动物世界超清#相声小品#戏曲#儿童#小姐姐4K#热门#旅行探险",
cookie: ""
},
tgsou: {
tgPic: false,
// number of results returned per channel
count: "4",
url: 'https://tgsou.651156.xyz',
channelUsername: "xx123pan,Q66Share,alyp_TV,ucpanpan,ucquark,tianyirigeng,shares_115,cloud189_group,tianyi_pd2,hao115,guaguale115,yunpanchat,ydypzyfx,tgsearchers,NewQuark,Mbox115,dianyingshare,XiangxiuNB,yunpanpan,kuakeyun,Quark_Movies,qixingzhenren,longzbija,alyp_4K_Movies,yunpanshare,shareAliyun,ikiviyyp,alyp_1",
},
wogg: {
url: 'http://woggpan.xxooo.cf',
},
tudou: {
url: "https://tudou.lvdoui.top"
},
wobg: {
url: "https://wobge.run.goorm.io/"
},
czzy: {
url: "https://cz01.vip"
},
hezi: {
url: "https://www.fygame.top/"
},
ttkx: {
url: "http://ttkx.live:7728/"
},
cm: {
url: "https://tv.yydsys.top"
},
libvio: {
url: "https://libvio.app/"
},
xxpan: {
url: "https://xpanpan.site"
},
m3u8cj: {
ykm3u8: [{
name: "360源",
url: "https://360zy.com/api.php/seaxml/vod/",
categories: [],
search: true
}],
doubanm3u8: [{
name: "豆瓣采集",
url: "https://caiji.dbzy.tv/api.php/provide/vod/from/dbm3u8/at/josn/",
categories: [],
search: true
}],
hmm3u8: [{
name: "黑木耳",
url: "https://json02.heimuer.xyz/api.php/provide/vod/",
categories: [],
search: true
}],
clm3u8: [{
name: "暴风",
url: "https://bfzyapi.com/api.php/provide/vod/",
categories: [],
search: true
}],
askm3u8: [{
name: "魔都",
url: "https://www.mdzyapi.com/api.php/provide/vod/?ac=list",
search: true
}],
sngm3u8: [{
name: "ikun",
url: "https://ikunzyapi.com/api.php/provide/vod/",
search: true
}],
ptm3u8: [{
name: "非凡",
url: "http://api.ffzyapi.com/api.php/provide/vod/",
search: true
}],
swm3u8: [{
name: "量子",
url: "https://cj.lziapi.com/api.php/provide/vod/",
categories: [],
search: true
}]
},
appys: {
ttmja: [{
name: "天天美剧",
url: "https://www.ttmja.com/api.php/app/",
// categories: ['国产剧', '香港剧', '韩国剧', '欧美剧', '台湾剧', '日本剧', '海外剧', '泰国剧', '短剧', '动作片', '喜剧片', '爱情片', '科幻片', '恐怖片', '剧情片', '战争片', '动漫片', '大陆综艺', '港台综艺', '日韩综艺', '欧美综艺', '国产动漫', '日韩动漫', '欧美动漫', '港台动漫', '海外动漫', '记录片'],
search: true
// search toggle: true = on, false = off
}],
netfly: [{
name: "奈飞",
url: "http://www.netfly.tv/api.php/app/",
// categories: ['国产剧', '香港剧', '韩国剧', '欧美剧', '台湾剧', '日本剧', '海外剧', '泰国剧', '短剧', '动作片', '喜剧片', '爱情片', '科幻片', '恐怖片', '剧情片', '战争片', '动漫片', '大陆综艺', '港台综艺', '日韩综艺', '欧美综艺', '国产动漫', '日韩动漫', '欧美动漫', '港台动漫', '海外动漫', '记录片'],
search: true
// search toggle: true = on, false = off
}]
},
alist: [
{
"name": "合集",
"server": "http://www.jczyl.top:5244/"
},
{
"name": "东哥",
"server": "http://101.34.67.237:5244/"
},
{
"name": "美云",
"server": "https://h.dfjx.ltd/"
},
{
"name": "小新",
"server": "https://pan.cdnxin.top/"
},
{
"name": "白云",
"server": "http://breadmyth.asuscomm.com:22222/"
},
{
"name": "小鸭",
"server": "http://www.214728327.xyz:5201/"
},
{
"name": "瑶瑶",
"server": "https://lyly.run.goorm.io/"
},
{
"name": "潇洒",
"server": "https://alist.azad.asia/"
},
{
"name": "鹏程",
"server": "https://pan.pengcheng.team/"
},
{
"name": "浅唱",
"server": "http://vtok.pp.ua/"
},
{
"name": "小丫",
"server": "http://alist.xiaoya.pro/"
},
{
"name": "触光",
"server": "https://pan.ichuguang.com"
},
{
"name": "弱水",
"server": "http://shicheng.wang:555/"
},
{
"name": "神器",
"server": "https://alist.ygxz.xyz/"
},
{
"name": "资源",
"server": "https://pan.ecve.cn/"
},
{
"name": "雨呢",
"server": "https://pan.clun.top/"
},
{
"name": "oeio",
"server": "https://o.oeio.repl.co/"
},
{
"name": "悦享",
"server": "https://nics.eu.org/"
},
{
"name": "分享",
"server": "https://ofoo.ml/"
},
{
"name": "PRO",
"server": "https://alist.prpr.run/"
},
{
"name": "多多",
"server": "https://pan.xwbeta.com"
},
{
"name": "小陈",
"server": "https://ypan.cc/"
},
{
"name": "只鱼",
"server": "https://alist.youte.ml"
},
{
"name": "七米",
"server": "https://al.chirmyram.com"
},
{
"name": "九帝",
"server": "https://alist.shenzjd.com"
},
{
"name": "白雪",
"server": "https://pan.jlbx.xyz"
},
{
"name": "星梦",
"server": "https://pan.bashroot.top"
},
{
"name": "repl",
"server": "https://ali.liucn.repl.co"
},
{
"name": "讯维",
"server": "https://pan.xwbeta.com"
}
],
color: [
{
light: {
bg: "https://img.omii.top/i/2024/03/28/mexspg.webp",
bgMask: "0x50ffffff",
primary: "0xff446732",
onPrimary: "0xffffffff",
primaryContainer: "0xffc5efab",
onPrimaryContainer: "0xff072100",
secondary: "0xff55624c",
onSecondary: "0xffffffff",
secondaryContainer: "0xffd9e7cb",
onSecondaryContainer: "0xff131f0d",
tertiary: "0xff386666",
onTertiary: "0xffffffff",
tertiaryContainer: "0xffbbebec",
onTertiaryContainer: "0xff002020",
error: "0xffba1a1a",
onError: "0xffffffff",
errorContainer: "0xffffdad6",
onErrorContainer: "0xff410002",
background: "0xfff8faf0",
onBackground: "0xff191d16",
surface: "0xfff8faf0",
onSurface: "0xff191d16",
surfaceVariant: "0xffe0e4d6",
onSurfaceVariant: "0xff191d16",
inverseSurface: "0xff2e312b",
inverseOnSurface: "0xfff0f2e7",
outline: "0xff74796d",
outlineVariant: "0xffc3c8bb",
shadow: "0xff000000",
scrim: "0xff000000",
inversePrimary: "0xffaad291",
surfaceTint: "0xff446732"
},
dark: {
bg: "https://img.omii.top/i/2024/03/28/mexyit.webp",
bgMask: "0x50000000",
primary: "0xffaad291",
onPrimary: "0xff173807",
primaryContainer: "0xff2d4f1c",
onPrimaryContainer: "0xffc5efab",
secondary: "0xffbdcbb0",
onSecondary: "0xff283420",
secondaryContainer: "0xff3e4a35",
onSecondaryContainer: "0xffd9e7cb",
tertiary: "0xffa0cfcf",
onTertiary: "0xff003738",
tertiaryContainer: "0xff1e4e4e",
onTertiaryContainer: "0xffbbebec",
error: "0xffffb4ab",
onError: "0xff690005",
errorContainer: "0xff93000a",
onErrorContainer: "0xffffdad6",
background: "0xff11140e",
onBackground: "0xffe1e4d9",
surface: "0xff11140e",
onSurface: "0xffe1e4d9",
surfaceVariant: "0xff43483e",
onSurfaceVariant: "0xffe1e4d9",
inverseSurface: "0xffe1e4d9",
inverseOnSurface: "0xff2e312b",
outline: "0xff8d9286",
outlineVariant: "0xff43483e",
shadow: "0xff000000",
scrim: "0xff000000",
inversePrimary: "0xff446732",
surfaceTint: "0xffaad291"
}
},
{
light: {
"bg": "https://img.omii.top/i/2024/03/27/oudroy-0.webp",
"bgMask": "0x50ffffff",
"primary": "0xFFA00B0B",
"onPrimary": "0xFFFFFFFF",
"primaryContainer": "0xFF333433",
"onPrimaryContainer": "0xFFBDC0B0",
"secondary": "0xFF55624C",
"onSecondary": "0xFFFFFFFF",
"secondaryContainer": "0xFFFFEBEE",
"onSecondaryContainer": "0xFFeb4d4b",
"tertiary": "0xFF663840",
"onTertiary": "0xFFFFFFFF",
"tertiaryContainer": "0xFFEBBBBE",
"onTertiaryContainer": "0xFF200006",
"error": "0xFFBA1A1A",
"onError": "0xFFFFFFFF",
"errorContainer": "0xFFFFDAD6",
"onErrorContainer": "0xFF410002",
"background": "0xFFFDFDF5",
"onBackground": "0xFFB94242",
"surface": "0xFFFDFDF5",
"onSurface": "0xFFB94242",
"surfaceVariant": "0xFFE4D6D8",
"onSurfaceVariant": "0xFFB94242",
"inverseSurface": "0xFF312C2C",
"onInverseSurface": "0xFFF1F1EA",
"outline": "0xFF74796D",
"outlineVariant": "0xFFC3C8BB",
"shadow": "0xFF000000",
"scrim": "0xFF000000",
"inversePrimary": "0xFFff7979",
"surfaceTint": "0xFFA00B0B"
},
dark: {
"bg": "https://img.omii.top/i/2024/01/25/xdiepq-0.webp",
"bgMask": "0x50000000",
"primary": "0xFFff7979",
"onPrimary": "0xFFA00B0B",
"primaryContainer": "0xFFeb4d4b",
"onPrimaryContainer": "0xFFFFCDD2",
"secondary": "0xFFBDCBAF",
"onSecondary": "0xFF342023",
"secondaryContainer": "0xFF4A3536",
"onSecondaryContainer": "0xFFE7CACE",
"tertiary": "0xFFA0CFCF",
"onTertiary": "0xFF003737",
"tertiaryContainer": "0xFF1E4E4E",
"onTertiaryContainer": "0xFFBBEBEB",
"error": "0xFFFFB4AB",
"errorContainer": "0xFF93000A",
"onError": "0xFF690005",
"onErrorContainer": "0xFFFFDAD6",
"background": "0xFF1C1818",
"onBackground": "0xFFE3E3DC",
"outline": "0xFF92868B",
"onInverseSurface": "0xFF1C1818",
"inverseSurface": "0xFFE3DCE1",
"inversePrimary": "0xFFeb4d4b",
"shadow": "0xFF000000",
"surfaceTint": "0xFFDA607D",
"outlineVariant": "0xFF483E41",
"scrim": "0xFF000000",
"surface": "0xFF1C1818",
"onSurface": "0xFFC7C7C0",
"surfaceVariant": "0xFF43483E",
"onSurfaceVariant": "0xFFC7C7C0"
}
},
{
light: {
bg: "",
bgMask: "0x50ffffff",
primary: "0xFF2B6C00",
onPrimary: "0xFFFFFFFF",
primaryContainer: "0xFFA6F779",
onPrimaryContainer: "0xFF082100",
secondary: "0xFF55624C",
onSecondary: "0xFFFFFFFF",
secondaryContainer: "0xFFD9E7CA",
onSecondaryContainer: "0xFF131F0D",
tertiary: "0xFF386666",
onTertiary: "0xFFFFFFFF",
tertiaryContainer: "0xFFBBEBEB",
onTertiaryContainer: "0xFF002020",
error: "0xFFBA1A1A",
onError: "0xFFFFFFFF",
errorContainer: "0xFFFFDAD6",
onErrorContainer: "0xFF410002",
background: "0xFFFDFDF5",
onBackground: "0xFF1A1C18",
surface: "0xFFFDFDF5",
onSurface: "0xFF1A1C18",
surfaceVariant: "0xFFE0E4D6",
onSurfaceVariant: "0xFF1A1C18",
inverseSurface: "0xFF2F312C",
onInverseSurface: "0xFFF1F1EA",
outline: "0xFF74796D",
outlineVariant: "0xFFC3C8BB",
shadow: "0xFF000000",
scrim: "0xFF000000",
inversePrimary: "0xFF8CDA60",
surfaceTint: "0xFF2B6C00"
},
dark: {
bg: "",
bgMask: "0x50000000",
primary: "0xFF8CDA60",
onPrimary: "0xFF133800",
primaryContainer: "0xFF1F5100",
onPrimaryContainer: "0xFFA6F779",
secondary: "0xFFBDCBAF",
onSecondary: "0xFF283420",
secondaryContainer: "0xFF3E4A35",
onSecondaryContainer: "0xFFD9E7CA",
tertiary: "0xFFA0CFCF",
onTertiary: "0xFF003737",
tertiaryContainer: "0xFF1E4E4E",
onTertiaryContainer: "0xFFBBEBEB",
error: "0xFFFFB4AB",
errorContainer: "0xFF93000A",
onError: "0xFF690005",
onErrorContainer: "0xFFFFDAD6",
background: "0xFF1A1C18",
onBackground: "0xFFE3E3DC",
outline: "0xFF8D9286",
onInverseSurface: "0xFF1A1C18",
inverseSurface: "0xFFE3E3DC",
inversePrimary: "0xFF2B6C00",
shadow: "0xFF000000",
surfaceTint: "0xFF8CDA60",
outlineVariant: "0xFF43483E",
scrim: "0xFF000000",
surface: "0xFF1A1C18",
onSurface: "0xFFC7C7C0",
surfaceVariant: "0xFF43483E",
onSurfaceVariant: "0xFFC7C7C0"
}
}
]
};

js/index.config.js.md5 Executable file

@@ -0,0 +1 @@
cf192982bce5ec2dc9b801a3209a3619

js/index.css Executable file

@@ -0,0 +1,220 @@
* {
margin: 0;
padding: 0;
box-sizing: border-box;
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif
}
body {
line-height: 1.5;
min-height: 100vh;
display: flex;
justify-content: center;
align-items: center;
padding: 15px
}
.container {
width: 60%;
background-color: #f0f0f0;
padding: 30px;
border-radius: 15px;
transition: all 0.4s ease
}
header {
text-align: center;
margin-bottom: 20px
}
h1 {
font-size: 2.2rem;
margin-bottom: 10px;
position: relative;
padding-bottom: 10px
}
h1::after {
content: '';
position: absolute;
bottom: 0;
left: 50%;
transform: translateX(-50%);
width: 80px;
height: 3px;
border-radius: 2px
}
.subtitle {
font-size: 1.1rem;
max-width: 600px;
margin: 0 auto
}
.content {
display: flex;
gap: 20px;
margin-bottom: 20px
}
.main-content {
flex: 3
}
.sidebar {
flex: 1;
padding: 15px;
border-radius: 8px
}
p {
margin-bottom: 15px;
font-size: 1rem;
text-align: justify
}
ul {
padding-left: 18px;
margin: 15px 0
}
ul li {
line-height: 1.8;
margin-bottom: 8px;
position: relative;
padding-left: 0
}
a {
color: #20a53a;
text-decoration: none;
font-weight: 600;
transition: all 0.3s ease;
border-bottom: 2px solid transparent
}
.device-info {
padding: 12px;
border-radius: 8px;
text-align: center;
font-weight: 600;
margin-top: 15px;
display: flex;
align-items: center;
justify-content: center;
gap: 8px
}
.device-info i {
font-size: 1.3rem
}
.features {
display: flex;
gap: 15px;
margin-top: 20px
}
.feature-card {
padding: 20px;
border-radius: 8px;
text-align: center;
flex: 1;
transition: transform 0.3s ease
}
.feature-card:hover {
transform: translateY(-3px)
}
.feature-card i {
font-size: 2.2rem;
margin-bottom: 12px
}
@media (max-width:1024px) {
.container {
width: 90%;
padding: 25px
}
h1 {
font-size: 1.9rem
}
.content {
flex-direction: column;
gap: 15px
}
.features {
flex-wrap: wrap
}
.feature-card {
flex: 1 1 45%;
padding: 18px
}
}
@media (max-width:768px) {
body {
padding: 8px
}
.container {
padding: 20px 15px
}
h1 {
font-size: 1.7rem;
padding-bottom: 8px
}
.subtitle {
font-size: 0.95rem
}
ul li {
line-height: 1.6
}
.features {
flex-direction: column;
gap: 12px
}
.device-info {
padding: 10px;
font-size: 0.9rem
}
}
@media (max-width:640px) {
body {
background-color: #f0f0f0
}
.container {
width: auto;
margin: 0
}
}
@media (max-width:480px) {
.container {
padding: 18px 12px
}
h1 {
font-size: 1.5rem
}
ul li {
font-size: 0.92rem
}
.feature-card {
padding: 15px
}
}

js/index.js Executable file

File diff suppressed because one or more lines are too long

js/index.js.md5 Executable file

@@ -0,0 +1 @@
67840d11eb2045ee0f40615ef2b0e0bb

js/index.min.js vendored Executable file

@@ -0,0 +1,62 @@
const currentDomain = window.location.origin;
let isShowingIP = false;
async function dl(fileUrl, filename) {
try {
const response = await fetch(fileUrl);
if (!response.ok)
throw new Error(`HTTP 错误: ${response.status}`);
const blob = await response.blob();
const url = URL.createObjectURL(blob);
const link = document.createElement('a');
link.href = url;
link.download = filename;
document.body.appendChild(link);
link.click();
document.body.removeChild(link);
setTimeout(() => URL.revokeObjectURL(url), 100);
} catch (error) {
console.error('下载错误:', error);
}
}
async function getip() {
try {
const response = await fetch("/cdn-cgi/trace");
if (response.ok) {
const data = await response.text();
const lines = data.split("\n");
const info = {};
lines.forEach((line) => {
const parts = line.split("=");
if (parts.length === 2) {
info[parts[0]] = parts[1];
}
});
const displayText = `访客:${info.loc} | ${info.http} | IP:${info.ip} | 节点:${info.colo} | 加密:${info.tls}`;
return displayText;
}
return "显示失败";
} catch (error) {
console.error("获取失败: ", error);
return "显示失败";
}
}
$(document).ready(function () {
var originalText = $("#cfs").text();
$("#cfs").click(async function () {
if (!isShowingIP) {
const ip = await getip();
$(this).text(`${ip}`);
} else {
$(this).text(originalText);
}
isShowingIP = !isShowingIP;
});
var t1 = performance.now();
$("#time").text("页面加载耗时 " + Math.round(t1) + " 毫秒");
});

js/jiyingw.js Executable file

@@ -0,0 +1,230 @@
var rule = {
title:'极影网[磁]',
host:'https://www.jiyingw.net',
homeUrl:'/',
url: '/fyclass/page/fypage?',
//host:'http://127.0.0.1:10079',
//homeUrl:'/p/0/socks5%253A%252F%252F192.168.101.1%253A1080/https://www.jiyingw.net',
//url: '/p/0/socks5%253A%252F%252F192.168.101.1%253A1080/https://www.jiyingw.net/fyclass/page/fypage?',
filter_url:'{{fl.class}}',
filter:{
"movie":[{"key":"class","name":"标签","value":[{"n":"全部","v":"movie"},{"n":"4k","v":"tag/4k"}, {"n":"人性","v":"tag/人性"}, {"n":"传记","v":"tag/chuanji"}, {"n":"儿童","v":"tag/儿童"}, {"n":"冒险","v":"tag/adventure"}, {"n":"剧情","v":"tag/剧情"}, {"n":"加拿大","v":"tag/加拿大"}, {"n":"动作","v":"tag/dongzuo"}, {"n":"动漫","v":"tag/动漫"}, {"n":"励志","v":"tag/励志"}, {"n":"历史","v":"tag/history"}, {"n":"古装","v":"tag/古装"}, {"n":"同性","v":"tag/gay"}, {"n":"喜剧","v":"tag/comedy"}, {"n":"国剧","v":"tag/国剧"}, {"n":"奇幻","v":"tag/qihuan"}, {"n":"女性","v":"tag/女性"}, {"n":"家庭","v":"tag/family"}, {"n":"德国","v":"tag/德国"}, {"n":"恐怖","v":"tag/kongbu"}, {"n":"悬疑","v":"tag/xuanyi"}, {"n":"惊悚","v":"tag/jingsong"}, {"n":"意大利","v":"tag/意大利"}, {"n":"战争","v":"tag/zhanzheng"}, {"n":"战斗","v":"tag/战斗"}, {"n":"搞笑","v":"tag/搞笑"}, {"n":"故事","v":"tag/故事"}, {"n":"文艺","v":"tag/文艺"}, {"n":"日常","v":"tag/日常"}, {"n":"日本","v":"tag/日本"}, {"n":"日语","v":"tag/日语"}, {"n":"校园","v":"tag/校园"}, {"n":"武侠","v":"tag/wuxia"}, {"n":"法国","v":"tag/法国"}, {"n":"游戏","v":"tag/游戏"}, {"n":"灾难","v":"tag/zainan"}, {"n":"爱情","v":"tag/爱情"}, {"n":"犯罪","v":"tag/crime"}, {"n":"真人秀","v":"tag/zhenrenxiu"}, {"n":"短片","v":"tag/duanpian"}, {"n":"科幻","v":"tag/kehuan"}, {"n":"纪录","v":"tag/jilu"}, {"n":"美剧","v":"tag/meiju"}, {"n":"舞台","v":"tag/stage"}, {"n":"西部","v":"tag/xibu"}, {"n":"运动","v":"tag/yundong"}, {"n":"韩剧","v":"tag/韩剧"}, {"n":"韩国","v":"tag/韩国"}, {"n":"音乐","v":"tag/yinyue"}, {"n":"高清电影","v":"tag/高清电影"}]}]
},
searchUrl: '/?s=**',
searchable:2,
quickSearch:0,
filterable:1,
headers:{
'User-Agent': 'PC_UA',
'Cookie':'http://127.0.0.1:9978/file:///tvbox/JS/lib/jiyingw.txt',
'Accept':'*/*',
'Referer': 'https://www.jiyingw.net/'
},
timeout:5000,
class_name:'电影&电视剧&动漫&综艺&影评',
class_url:'movie&tv&cartoon&movie/variety&yingping',
play_parse:true,
play_json:[{
re:'*',
json:{
parse:0,
jx:0
}
}],
lazy:'',
limit:6,
推荐:'ul#post_container li;a&&title;img&&src;.article entry_post&&Text;a&&href',
一级:'ul#post_container li;a&&title;img&&src;.article entry_post&&Text;a&&href',
二级:{
title:"h1&&Text",
img:"#post_content img&&src",
desc:"#post_content&&Text",
content:"#post_content&&Text",
tabs:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
TABS=[]
let tabsa = [];
let tabsq = [];
let tabsm = false;
let tabse = false;
let d = pdfa(html, '#post_content p a');
d.forEach(function(it) {
let burl = pdfh(it, 'a&&href');
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
tabsa.push("阿里雲盤");
}else if (burl.startsWith("https://pan.quark.cn/s/")){
tabsq.push("夸克網盤");
}else if (burl.startsWith("magnet")){
tabsm = true;
}else if (burl.startsWith("ed2k")){
tabse = true;
}
});
d = pdfa(html, 'div#down p.down-list3 a');
d.forEach(function(it) {
let burl = pdfh(it, 'a&&href');
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
tabsa.push("阿里雲盤");
}else if (burl.startsWith("https://pan.quark.cn/s/")){
tabsq.push("夸克網盤");
}else if (burl.startsWith("magnet")){
tabsm = true;
}else if (burl.startsWith("ed2k")){
tabse = true;
}
});
if (tabsm === true){
TABS.push("磁力");
}
if (tabse === true){
TABS.push("電驢");
}
if (false && tabsa.length + tabsq.length > 1){
TABS.push("選擇右側綫路");
}
let tmpIndex;
tmpIndex=1;
tabsa.forEach(function(it){
TABS.push(it + tmpIndex);
tmpIndex = tmpIndex + 1;
});
tmpIndex=1;
tabsq.forEach(function(it){
TABS.push(it + tmpIndex);
tmpIndex = tmpIndex + 1;
});
log('jiyingw TABS >>>>>>>>>>>>>>>>>>' + TABS);
`,
lists:`js:
log(TABS);
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
LISTS = [];
let lista = [];
let listq = [];
let listm = [];
let liste = [];
let d = pdfa(html, '#post_content p a');
d.forEach(function(it){
let burl = pdfh(it, 'a&&href');
let title = pdfh(it, 'a&&Text');
log('jiyingw title >>>>>>>>>>>>>>>>>>>>>>>>>>' + title);
log('jiyingw burl >>>>>>>>>>>>>>>>>>>>>>>>>>' + burl);
let loopresult = title + '$' + burl;
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = 'push://' + burl;
}
loopresult = title + '$' + burl;
lista.push(loopresult);
}else if (burl.startsWith("https://pan.quark.cn/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = 'push://' + burl;
}
loopresult = title + '$' + burl;
listq.push(loopresult);
}else if (burl.startsWith("magnet")){
listm.push(loopresult);
}else if (burl.startsWith("ed2k")){
liste.push(loopresult);
}
});
d = pdfa(html, 'div#down p.down-list3 a');
d.forEach(function(it){
let burl = pdfh(it, 'a&&href');
let title = pdfh(it, 'a&&Text');
log('jiyingw title >>>>>>>>>>>>>>>>>>>>>>>>>>' + title);
log('jiyingw burl >>>>>>>>>>>>>>>>>>>>>>>>>>' + burl);
let loopresult = title + '$' + burl;
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = 'push://' + burl;
}
loopresult = title + '$' + burl;
lista.push(loopresult);
}else if (burl.startsWith("https://pan.quark.cn/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = 'push://' + burl;
}
loopresult = title + '$' + burl;
listq.push(loopresult);
}else if (burl.startsWith("magnet")){
listm.push(loopresult);
}else if (burl.startsWith("ed2k")){
liste.push(loopresult);
}
});
if (listm.length>0){
LISTS.push(listm);
}
if (liste.length>0){
LISTS.push(liste);
}
if (false && lista.length + listq.length > 1){
LISTS.push(["選擇右側綫路或3秒後自動跳過$http://127.0.0.1:10079/delay/"]);
}
lista.forEach(function(it){
LISTS.push([it]);
});
listq.forEach(function(it){
LISTS.push([it]);
});
`,
},
搜索:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
if (rule_fetch_params.headers.Cookie.startsWith("http")){
rule_fetch_params.headers.Cookie=fetch(rule_fetch_params.headers.Cookie);
let cookie = rule_fetch_params.headers.Cookie;
setItem(RULE_CK, cookie);
};
log('jiyingw search cookie>>>>>>>>>>>>>>>' + rule_fetch_params.headers.Cookie);
let _fetch_params = JSON.parse(JSON.stringify(rule_fetch_params));
let search_html=request(rule.homeUrl + '?s=' + encodeURIComponent(KEY), _fetch_params);
let d=[];
let dlist = pdfa(search_html, 'h2');
log("jiyingw dlist.length>>>>>>>"+dlist.length);
dlist.forEach(function(it){
let title = pdfh(it, 'a&&title');
//if (searchObj.quick === true){
// title = KEY;
//}
let img = '';
let content = title;
let desc = title;
let url = pd(it, 'a&&href', HOST);
d.push({
title:title,
img:img,
content:content,
desc:desc,
url:url
})
});
setResult(d);
`,
}

js/jiyingwp.js Executable file

@@ -0,0 +1,230 @@
var rule = {
title:'极影网[磁]',
//host:'https://www.jiyingw.net',
//homeUrl:'/',
//url: '/fyclass/page/fypage?',
host:'http://127.0.0.1:10079',
homeUrl:'/p/0/socks5%253A%252F%252F192.168.101.1%253A1080/https://www.jiyingw.net/',
url: '/p/0/socks5%253A%252F%252F192.168.101.1%253A1080/https://www.jiyingw.net/fyclass/page/fypage?',
filter_url:'{{fl.class}}',
filter:{
"movie":[{"key":"class","name":"标签","value":[{"n":"全部","v":"movie"},{"n":"4k","v":"tag/4k"}, {"n":"人性","v":"tag/人性"}, {"n":"传记","v":"tag/chuanji"}, {"n":"儿童","v":"tag/儿童"}, {"n":"冒险","v":"tag/adventure"}, {"n":"剧情","v":"tag/剧情"}, {"n":"加拿大","v":"tag/加拿大"}, {"n":"动作","v":"tag/dongzuo"}, {"n":"动漫","v":"tag/动漫"}, {"n":"励志","v":"tag/励志"}, {"n":"历史","v":"tag/history"}, {"n":"古装","v":"tag/古装"}, {"n":"同性","v":"tag/gay"}, {"n":"喜剧","v":"tag/comedy"}, {"n":"国剧","v":"tag/国剧"}, {"n":"奇幻","v":"tag/qihuan"}, {"n":"女性","v":"tag/女性"}, {"n":"家庭","v":"tag/family"}, {"n":"德国","v":"tag/德国"}, {"n":"恐怖","v":"tag/kongbu"}, {"n":"悬疑","v":"tag/xuanyi"}, {"n":"惊悚","v":"tag/jingsong"}, {"n":"意大利","v":"tag/意大利"}, {"n":"战争","v":"tag/zhanzheng"}, {"n":"战斗","v":"tag/战斗"}, {"n":"搞笑","v":"tag/搞笑"}, {"n":"故事","v":"tag/故事"}, {"n":"文艺","v":"tag/文艺"}, {"n":"日常","v":"tag/日常"}, {"n":"日本","v":"tag/日本"}, {"n":"日语","v":"tag/日语"}, {"n":"校园","v":"tag/校园"}, {"n":"武侠","v":"tag/wuxia"}, {"n":"法国","v":"tag/法国"}, {"n":"游戏","v":"tag/游戏"}, {"n":"灾难","v":"tag/zainan"}, {"n":"爱情","v":"tag/爱情"}, {"n":"犯罪","v":"tag/crime"}, {"n":"真人秀","v":"tag/zhenrenxiu"}, {"n":"短片","v":"tag/duanpian"}, {"n":"科幻","v":"tag/kehuan"}, {"n":"纪录","v":"tag/jilu"}, {"n":"美剧","v":"tag/meiju"}, {"n":"舞台","v":"tag/stage"}, {"n":"西部","v":"tag/xibu"}, {"n":"运动","v":"tag/yundong"}, {"n":"韩剧","v":"tag/韩剧"}, {"n":"韩国","v":"tag/韩国"}, {"n":"音乐","v":"tag/yinyue"}, {"n":"高清电影","v":"tag/高清电影"}]}]
},
searchUrl: '/?s=**',
searchable:2,
quickSearch:0,
filterable:1,
headers:{
'User-Agent': 'PC_UA',
'Cookie':'http://127.0.0.1:9978/file:///tvbox/JS/lib/jiyingw.txt',
'Accept':'*/*',
'Referer': 'https://www.jiyingw.net/'
},
timeout:5000,
class_name:'电影&电视剧&动漫&综艺&影评',
class_url:'movie&tv&cartoon&movie/variety&yingping',
play_parse:true,
play_json:[{
re:'*',
json:{
parse:0,
jx:0
}
}],
lazy:'',
limit:6,
推荐:'ul#post_container li;a&&title;img&&src;.article entry_post&&Text;a&&href',
一级:'ul#post_container li;a&&title;img&&src;.article entry_post&&Text;a&&href',
二级:{
title:"h1&&Text",
img:"#post_content img&&src",
desc:"#post_content&&Text",
content:"#post_content&&Text",
tabs:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
TABS=[]
let tabsa = [];
let tabsq = [];
let tabsm = false;
let tabse = false;
let d = pdfa(html, '#post_content p a');
d.forEach(function(it) {
let burl = pdfh(it, 'a&&href');
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
tabsa.push("阿里雲盤");
}else if (burl.startsWith("https://pan.quark.cn/s/")){
tabsq.push("夸克網盤");
}else if (burl.startsWith("magnet")){
tabsm = true;
}else if (burl.startsWith("ed2k")){
tabse = true;
}
});
d = pdfa(html, 'div#down p.down-list3 a');
d.forEach(function(it) {
let burl = pdfh(it, 'a&&href');
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
tabsa.push("阿里雲盤");
}else if (burl.startsWith("https://pan.quark.cn/s/")){
tabsq.push("夸克網盤");
}else if (burl.startsWith("magnet")){
tabsm = true;
}else if (burl.startsWith("ed2k")){
tabse = true;
}
});
if (tabsm === true){
TABS.push("磁力");
}
if (tabse === true){
TABS.push("電驢");
}
if (false && tabsa.length + tabsq.length > 1){
TABS.push("選擇右側綫路");
}
let tmpIndex;
tmpIndex=1;
tabsa.forEach(function(it){
TABS.push(it + tmpIndex);
tmpIndex = tmpIndex + 1;
});
tmpIndex=1;
tabsq.forEach(function(it){
TABS.push(it + tmpIndex);
tmpIndex = tmpIndex + 1;
});
log('jiyingw TABS >>>>>>>>>>>>>>>>>>' + TABS);
`,
lists:`js:
log(TABS);
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
LISTS = [];
let lista = [];
let listq = [];
let listm = [];
let liste = [];
let d = pdfa(html, '#post_content p a');
d.forEach(function(it){
let burl = pdfh(it, 'a&&href');
let title = pdfh(it, 'a&&Text');
log('jiyingw title >>>>>>>>>>>>>>>>>>>>>>>>>>' + title);
log('jiyingw burl >>>>>>>>>>>>>>>>>>>>>>>>>>' + burl);
let loopresult = title + '$' + burl;
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = 'push://' + burl;
}
loopresult = title + '$' + burl;
lista.push(loopresult);
}else if (burl.startsWith("https://pan.quark.cn/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = 'push://' + burl;
}
loopresult = title + '$' + burl;
listq.push(loopresult);
}else if (burl.startsWith("magnet")){
listm.push(loopresult);
}else if (burl.startsWith("ed2k")){
liste.push(loopresult);
}
});
d = pdfa(html, 'div#down p.down-list3 a');
d.forEach(function(it){
let burl = pdfh(it, 'a&&href');
let title = pdfh(it, 'a&&Text');
log('jiyingw title >>>>>>>>>>>>>>>>>>>>>>>>>>' + title);
log('jiyingw burl >>>>>>>>>>>>>>>>>>>>>>>>>>' + burl);
let loopresult = title + '$' + burl;
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = 'push://' + burl;
}
loopresult = title + '$' + burl;
lista.push(loopresult);
}else if (burl.startsWith("https://pan.quark.cn/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = 'push://' + burl;
}
loopresult = title + '$' + burl;
listq.push(loopresult);
}else if (burl.startsWith("magnet")){
listm.push(loopresult);
}else if (burl.startsWith("ed2k")){
liste.push(loopresult);
}
});
if (listm.length>0){
LISTS.push(listm);
}
if (liste.length>0){
LISTS.push(liste);
}
if (false && lista.length + listq.length > 1){
LISTS.push(["選擇右側綫路或3秒後自動跳過$http://127.0.0.1:10079/delay/"]);
}
lista.forEach(function(it){
LISTS.push([it]);
});
listq.forEach(function(it){
LISTS.push([it]);
});
`,
},
搜索:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
if (rule_fetch_params.headers.Cookie.startsWith("http")){
rule_fetch_params.headers.Cookie=fetch(rule_fetch_params.headers.Cookie);
let cookie = rule_fetch_params.headers.Cookie;
setItem(RULE_CK, cookie);
};
log('jiyingw search cookie>>>>>>>>>>>>>>>' + rule_fetch_params.headers.Cookie);
let _fetch_params = JSON.parse(JSON.stringify(rule_fetch_params));
let search_html=request(rule.homeUrl + '?s=' + encodeURIComponent(KEY), _fetch_params);
let d=[];
let dlist = pdfa(search_html, 'h2');
log("jiyingw dlist.length>>>>>>>"+dlist.length);
dlist.forEach(function(it){
let title = pdfh(it, 'a&&title');
//if (searchObj.quick === true){
// title = KEY;
//}
let img = '';
let content = title;
let desc = title;
let url = pd(it, 'a&&href', HOST);
d.push({
title:title,
img:img,
content:content,
desc:desc,
url:url
})
});
setResult(d);
`,
}

js/jquery-1.11.0.min.js vendored Executable file

File diff suppressed because one or more lines are too long

js/jquery.3.3.1.min.js vendored Executable file

File diff suppressed because one or more lines are too long

js/jquery.3.6.4.min.js vendored Executable file

File diff suppressed because one or more lines are too long

js/jquery.min.js vendored Executable file

File diff suppressed because one or more lines are too long

js/kkpans.js Executable file

@@ -0,0 +1,162 @@
var rule = {
title:'KK網盤[磁]',
host:'https://www.kkpans.com',
homeUrl:'/',
url: '/forum-fyclass-fypage.html?',
//host:'http://192.168.101.1:10078',
//homeUrl:'/p/0/s/https://www.kkpans.com/',
//url: '/p/0/s/https://www.kkpans.com/forum-fyclass-fypage.html?',
filter_url:'{{fl.class}}',
filter:{
},
searchUrl: '/search',
searchable:2,
quickSearch:0,
filterable:0,
headers:{
'User-Agent': 'Mozilla/5.0 (Linux; Android 10; SM-G981B) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.162 Mobile Safari/537.36',
'Accept': '*/*',
'Referer': 'https://www.kkpans.com/'
},
timeout:5000,
class_name:'国外电影&国外电视剧&纪录片资源&综艺资源&动漫资源&音乐资源',
class_url:'39&40&41&42&46&43',
play_parse:true,
play_json:[{
re:'*',
json:{
parse:0,
jx:0
}
}],
lazy:'',
limit:6,
推荐:'',
一级:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let d = [];
log("kkpans input>>>>>>>>>>>>>>"+input);
let html = request(input);
//log("kkpans 1level html>>>>>>>>>>>>>>"+html);
let list = pdfa(html, 'div.threadlist ul li.list');
list.forEach(function(it) {
d.push({
title: pdfh(it, 'div.threadlist_tit&&Text'),
desc: pdfh(it, 'div.threadlist_top div:has(>h3) span&&Text'),
pic_url: '',
url: pd(it, 'li.list&&a[href^="forum.php"]:eq(1)&&href', HOST)
});
})
setResult(d);
`,
二级:{
title:"div.viewthread&&div.view_tit&&Text",
img:"div.viewthread div.message&&img&&src",
desc:"div.viewthread div.message&&Text",
content:"div.viewthread div.message&&Text",
tabs:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
TABS=[]
let d = pdfa(html, 'div.viewthread div.message a[href^="https://pan.quark.cn/s/"]');
let index = 1;
if (false && d.length>1){
TABS.push("選擇右側綫路");
}
d.forEach(function(it) {
TABS.push("夸克網盤" + index);
index = index + 1;
});
log('kkpans TABS >>>>>>>>>>>>>>>>>>' + TABS);
`,
lists:`js:
log(TABS);
LISTS=[];
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let d = pdfa(html, 'div.viewthread div.message a[href^="https://pan.quark.cn/s/"]');
let index = 1;
if (false && d.length>1){
LISTS.push(["選擇右側綫路或3秒後自動跳過$http://127.0.0.1:10079/delay/"]);
}
d.forEach(function(it) {
let burl = pdfh(it, 'a&&href');
if (true){
if (d.length==1){
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = "push://" + burl;
}
let title = pdfh(it, 'a&&Text');
LISTS.push([title + '$' + burl]);
});
`,
},
搜索:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let withHeaders = {
withHeaders: true
};
let _fetch_params = JSON.parse(JSON.stringify(rule_fetch_params));
Object.assign(_fetch_params, withHeaders);
let new_html=request(rule.homeUrl + 'search.php?mod=forum', _fetch_params);
log('kkpans search new_html >>>>>>>>>>>>>>>>>>>>>' + new_html);
let json=JSON.parse(new_html);
let setCk=Object.keys(json).find(it=>it.toLowerCase()==="set-cookie");
let cookie="";
if (typeof setCk !== "undefined"){
let d=[];
for(const key in json[setCk]){
if (typeof json[setCk][key] === "string"){
log("kkpans header setCk key>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>" + json[setCk][key] + " " + (typeof json[setCk][key]));
d.push(json[setCk][key].split(";")[0]);
}
}
cookie=d.join(";");
}
fetch_params.headers.Cookie=cookie;
rule_fetch_params.headers.Cookie=cookie;
log('kkpans search cookie >>>>>>>>>>>>>>>>>>>>>' + cookie);
//log('kkpans search body >>>>>>>>>>>>>>>>>>>>>' + json['body']);
new_html = json['body'];
let formhash = pdfh(new_html, 'input[name="formhash"]&&value');
log("kkpans formhash>>>>>>>>>>>>>>>" + formhash);
let params = 'formhash=' + formhash + '&searchsubmit=yes&srchtxt=' + encodeURIComponent(KEY);
_fetch_params = JSON.parse(JSON.stringify(rule_fetch_params));
let postData = {
body: params
};
Object.assign(_fetch_params, postData);
log("kkpans search postData>>>>>>>>>>>>>>>" + JSON.stringify(_fetch_params));
let search_html = post(rule.homeUrl + 'search.php?mod=forum', _fetch_params)
//log("kkpans search result>>>>>>>>>>>>>>>" + search_html);
let d=[];
let dlist = pdfa(search_html, 'div.threadlist ul li.list');
dlist.forEach(function(it){
let title = pdfh(it, 'div.threadlist_tit&&Text');
if (searchObj.quick === true){
if (title.includes(KEY)){
title = KEY;
}
}
let img = "";
let content = pdfh(it, 'div.threadlist_top div:has(>h3) span&&Text');
let desc = pdfh(it, 'div.threadlist_top div:has(>h3) span&&Text');
let url = pd(it, 'a[href^="forum.php?mod=viewthread"]&&href', HOST);
d.push({
title:title,
img:img,
content:content,
desc:desc,
url:url
})
});
setResult(d);
`,
}

js/kuba.js Executable file

@@ -0,0 +1,188 @@
var rule = {
title:'酷吧[磁]',
host:'https://www.kuba222.com',
homeUrl: '/',
url: '/vodtypehtml/fyclass.html?',
filter_url:'{{fl.class}}',
filter:{
},
searchUrl: '/search/**-1.html',
searchable:2,
quickSearch:0,
filterable:0,
headers:{
'User-Agent': 'PC_UA',
'Referer': 'https://www.kuba222.com/'
},
timeout:5000,
class_name: '最新&4K&电影&动作片&喜剧片&爱情片&科幻片&恐怖片&剧情片&战争片&微电影&电视剧&动漫&纪录片',
class_url: 'new&4K&1&5&6&7&8&9&10&11&21&31&4&16',
play_parse:true,
play_json:[{
re:'*',
json:{
parse:0,
jx:0
}
}],
lazy:'',
limit:6,
推荐:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let d = [];
let html = request(input);
let list = pdfa(html, 'ul.stui-vodlist li');
list.forEach(function (it){
d.push({
title: pdfh(it, 'a&&title'),
desc: pdfh(it, 'li&&div&&a&&span&&Text'),
pic_url: pd(it, 'a&&data-original', HOST),
url: pdfh(it, 'a&&href')
});
});
setResult(d);
`,
一级:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let d = [];
if (MY_CATE === '4K'){
let turl = (MY_PAGE === 1)? '' : '-' + MY_PAGE;
input = rule.homeUrl + 'vodtopichtml/' + '11' + turl + '.html';
}else if (MY_CATE === 'new'){
input = rule.homeUrl + MY_CATE + '.html';
}else{
let turl = (MY_PAGE === 1)? '' : '-' + MY_PAGE;
input = rule.homeUrl + 'vodtypehtml/' + MY_CATE + turl + '.html';
}
let html = request(input);
let list = pdfa(html, 'ul.stui-vodlist li');
list.forEach(function (it){
d.push({
title: pdfh(it, 'a&&title'),
desc: pdfh(it, 'li&&div&&a&&span&&Text'),
pic_url: pd(it, 'a&&data-original', HOST),
url: pdfh(it, 'a&&href')
});
});
setResult(d);
`,
二级:{
title:"div.stui-content h3&&Text",
img:"div.stui-content a.lazyload img&&src",
desc:'div.stui-content a span&&Text',
content:'div.stui-content p.data&&Text',
tabs:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
TABS=[]
let vodUrls=[];
try{
vodUrls.push(html.match(/var GvodUrls1 *= *"([^"]*)"/)[1]);
vodUrls.push(html.match(/var GvodUrls2 *= *"([^"]*)"/)[1]);
vodUrls.push(html.match(/var GvodUrls3 *= *"([^"]*)"/)[1]);
vodUrls.push(html.match(/var GvodUrls4 *= *"([^"]*)"/)[1]);
vodUrls.push(html.match(/var GvodUrls5 *= *"([^"]*)"/)[1]);
}catch(e){
}
let index=1;
vodUrls.forEach(function (it) {
TABS.push("磁力"+index);
index = index + 1;
});
log('kuba TABS >>>>>>>>>>>>>>>>>>' + TABS);
`,
lists:`js:
log(TABS);
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
LISTS = [];
let vodUrls=[];
//log("kuba html>>>>>>>>>>>>>>>>>>>>>>" + html);
try{
vodUrls.push(html.match(/var GvodUrls1 *= *"([^"]*)"/)[1]);
vodUrls.push(html.match(/var GvodUrls2 *= *"([^"]*)"/)[1]);
vodUrls.push(html.match(/var GvodUrls3 *= *"([^"]*)"/)[1]);
vodUrls.push(html.match(/var GvodUrls4 *= *"([^"]*)"/)[1]);
vodUrls.push(html.match(/var GvodUrls5 *= *"([^"]*)"/)[1]);
}catch(e){
log('kuba tabs e>>>>>>>>>>>>>>>>>>..' + e);
}
vodUrls.forEach(function (it) {
let epos = it.split("###");
let d=[];
epos.forEach(function (it1){
if (it1.length>0){
d.push(it1);
}
});
LISTS.push(d.reverse());
});
`,
},
搜索:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let cookie="";
if (false){
let new_html=request(HOST, {withHeaders:true});
let json=JSON.parse(new_html);
let setCk=Object.keys(json).find(it=>it.toLowerCase()==="set-cookie");
if (typeof setCk !== "undefined"){
let d=[];
for(const key in json[setCk]){
if (typeof json[setCk][key] === "string"){
log("kuba header setCk key>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>" + json[setCk][key] + " " + (typeof json[setCk][key]));
d.push(json[setCk][key].split(";")[0]);
}
}
cookie=d.join(";");
}
fetch_params.headers.Cookie=cookie;
rule_fetch_params.headers.Cookie=cookie;
}
log('kuba search cookie >>>>>>>>>>>>>>>>>>>>>' + cookie);
let params = 'wd='+ encodeURIComponent(KEY) + '&submit=';
let _fetch_params = JSON.parse(JSON.stringify(rule_fetch_params));
let postData = {
body: params
};
Object.assign(_fetch_params, postData);
log("kuba search postData>>>>>>>>>>>>>>>" + JSON.stringify(_fetch_params));
let search_html = post( HOST + '/index.php?m=vod-search', _fetch_params);
search_html = search_html.replace(/<script>.*?<\\/script>/g,"");
//log("kuba search result>>>>>>>>>>>>>>>" + search_html.substring(4096));
let d=[];
let dlist = pdfa(search_html, 'li.activeclearfix');
log("kuba search dlist.length>>>>>>>>>>>>>" + dlist.length);
dlist.forEach(function(it){
let title = pdfh(it, 'a&&title');
let img = pd(it, 'a&&data-original', HOST);
let content = pdfh(it, 'a&&Text');
let desc = pdfh(it, 'div.detail&&Text');
let url = pd(it, 'a&&href', HOST);
d.push({
title:title,
img:img,
content:content,
desc:desc,
url:url
});
});
dlist = pdfa(search_html, 'li.active.clearfix');
log("kuba search dlist.length>>>>>>>>>>>>>" + dlist.length);
dlist.forEach(function(it){
let title = pdfh(it, 'a&&title');
let img = pd(it, 'a&&data-original', HOST);
let content = pdfh(it, 'a&&Text');
let desc = pdfh(it, 'div.detail&&Text');
let url = pd(it, 'a&&href', HOST);
d.push({
title:title,
img:img,
content:content,
desc:desc,
url:url
});
});
setResult(d);
`,
}
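The `lists` block above turns each `GvodUrlsN` string scraped from the page into an episode list by splitting on `###`, dropping empty entries, and reversing so the newest episode comes first. A standalone sketch of that step:

```javascript
// Parse one GvodUrlsN string into an episode array, as in the kuba
// `lists` block: split on "###", drop empties, newest first.
function parseVodUrls(raw) {
  return raw
    .split("###")
    .filter(ep => ep.length > 0)
    .reverse();
}
```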

137
js/libvio.js Executable file

@ -0,0 +1,137 @@
// Permanent URL: https://libvio.app
muban.首图2.二级.title = 'h1&&Text;.data:eq(0)&&Text'
muban.首图2.二级.desc = '.data.hidden-xs&&Text;;;.data:eq(1)&&Text;.data:eq(4)&&Text'
muban.首图2.二级.content = '.detail-content&&Text'
var rule = {
title:'LIBVIO',
模板:'首图2',
// host:'https://tv.libvio.cc',
host:'https://tv.libvio.cc',
//hostJs:'print(HOST);let html=request(HOST,{headers:{"User-Agent":PC_UA}});let src=jsp.pdfh(html,"li:eq(0)&&a:eq(0)&&href");print(src);HOST=src',
// url:'/type/fyclass-fypage.html',
url:'/show/fyclassfyfilter.html',
// url:'/show_fyclassfyfilter.html',
filterable:1,//whether to enable category filtering
filter_url:'-{{fl.area}}-{{fl.by}}--{{fl.lang}}----fypage---{{fl.year}}',
filter: {
"1":[{"key":"area","name":"地区","value":[{"n":"全部","v":""},{"n":"中国大陆","v":"中国大陆"},{"n":"中国香港","v":"中国香港"},{"n":"中国台湾","v":"中国台湾"},{"n":"美国","v":"美国"},{"n":"法国","v":"法国"},{"n":"英国","v":"英国"},{"n":"日本","v":"日本"},{"n":"韩国","v":"韩国"},{"n":"德国","v":"德国"},{"n":"泰国","v":"泰国"},{"n":"印度","v":"印度"},{"n":"意大利","v":"意大利"},{"n":"西班牙","v":"西班牙"},{"n":"加拿大","v":"加拿大"},{"n":"其他","v":"其他"}]},{"key":"year","name":"年份","value":[{"n":"全部","v":""},{"n":"2023","v":"2023"},{"n":"2022","v":"2022"},{"n":"2021","v":"2021"},{"n":"2020","v":"2020"},{"n":"2019","v":"2019"},{"n":"2018","v":"2018"},{"n":"2017","v":"2017"},{"n":"2016","v":"2016"},{"n":"2015","v":"2015"},{"n":"2014","v":"2014"},{"n":"2013","v":"2013"},{"n":"2012","v":"2012"},{"n":"2011","v":"2011"},{"n":"2010","v":"2010"}]},{"key":"lang","name":"语言","value":[{"n":"全部","v":""},{"n":"国语","v":"国语"},{"n":"英语","v":"英语"},{"n":"粤语","v":"粤语"},{"n":"闽南语","v":"闽南语"},{"n":"韩语","v":"韩语"},{"n":"日语","v":"日语"},{"n":"法语","v":"法语"},{"n":"德语","v":"德语"},{"n":"其它","v":"其它"}]},{"key":"by","name":"排序","value":[{"n":"时间","v":"time"},{"n":"人气","v":"hits"},{"n":"评分","v":"score"}]}],
"2":[{"key":"area","name":"地区","value":[{"n":"全部","v":""},{"n":"中国大陆","v":"中国大陆"},{"n":"中国台湾","v":"中国台湾"},{"n":"中国香港","v":"中国香港"},{"n":"韩国","v":"韩国"},{"n":"日本","v":"日本"},{"n":"美国","v":"美国"},{"n":"泰国","v":"泰国"},{"n":"英国","v":"英国"},{"n":"新加坡","v":"新加坡"},{"n":"其他","v":"其他"}]},{"key":"year","name":"年份","value":[{"n":"全部","v":""},{"n":"2023","v":"2023"},{"n":"2022","v":"2022"},{"n":"2021","v":"2021"},{"n":"2020","v":"2020"},{"n":"2019","v":"2019"},{"n":"2018","v":"2018"},{"n":"2017","v":"2017"},{"n":"2016","v":"2016"},{"n":"2015","v":"2015"},{"n":"2014","v":"2014"},{"n":"2013","v":"2013"},{"n":"2012","v":"2012"},{"n":"2011","v":"2011"},{"n":"2010","v":"2010"}]},{"key":"lang","name":"语言","value":[{"n":"全部","v":""},{"n":"国语","v":"国语"},{"n":"英语","v":"英语"},{"n":"粤语","v":"粤语"},{"n":"闽南语","v":"闽南语"},{"n":"韩语","v":"韩语"},{"n":"日语","v":"日语"},{"n":"其它","v":"其它"}]},{"key":"by","name":"排序","value":[{"n":"时间","v":"time"},{"n":"人气","v":"hits"},{"n":"评分","v":"score"}]}],
"4":[{"key":"area","name":"地区","value":[{"n":"全部","v":""},{"n":"中国","v":"中国"},{"n":"日本","v":"日本"},{"n":"欧美","v":"欧美"},{"n":"其他","v":"其他"}]},{"key":"year","name":"年份","value":[{"n":"全部","v":""},{"n":"2023","v":"2023"},{"n":"2022","v":"2022"},{"n":"2021","v":"2021"},{"n":"2020","v":"2020"},{"n":"2019","v":"2019"},{"n":"2018","v":"2018"},{"n":"2017","v":"2017"},{"n":"2016","v":"2016"},{"n":"2015","v":"2015"},{"n":"2014","v":"2014"},{"n":"2013","v":"2013"},{"n":"2012","v":"2012"},{"n":"2011","v":"2011"},{"n":"2010","v":"2010"},{"n":"2009","v":"2009"},{"n":"2008","v":"2008"},{"n":"2007","v":"2007"},{"n":"2006","v":"2006"},{"n":"2005","v":"2005"},{"n":"2004","v":"2004"}]},{"key":"lang","name":"语言","value":[{"n":"全部","v":""},{"n":"国语","v":"国语"},{"n":"英语","v":"英语"},{"n":"粤语","v":"粤语"},{"n":"闽南语","v":"闽南语"},{"n":"韩语","v":"韩语"},{"n":"日语","v":"日语"},{"n":"其它","v":"其它"}]},{"key":"by","name":"排序","value":[{"n":"时间","v":"time"},{"n":"人气","v":"hits"},{"n":"评分","v":"score"}]}],
"27":[{"key":"by","name":"排序","value":[{"n":"时间","v":"time"},{"n":"人气","v":"hits"},{"n":"评分","v":"score"}]}],
"15":[{"key":"area","name":"地区","value":[{"n":"全部","v":""},{"n":"日本","v":"日本"},{"n":"韩国","v":"韩国"}]},{"key":"year","name":"年份","value":[{"n":"全部","v":""},{"n":"2023","v":"2023"},{"n":"2022","v":"2022"},{"n":"2021","v":"2021"},{"n":"2020","v":"2020"},{"n":"2019","v":"2019"},{"n":"2018","v":"2018"},{"n":"2017","v":"2017"},{"n":"2016","v":"2016"},{"n":"2015","v":"2015"},{"n":"2014","v":"2014"},{"n":"2013","v":"2013"},{"n":"2012","v":"2012"},{"n":"2011","v":"2011"},{"n":"2010","v":"2010"}]},{"key":"lang","name":"语言","value":[{"n":"全部","v":""},{"n":"国语","v":"国语"},{"n":"英语","v":"英语"},{"n":"粤语","v":"粤语"},{"n":"闽南语","v":"闽南语"},{"n":"韩语","v":"韩语"},{"n":"日语","v":"日语"},{"n":"其它","v":"其它"}]},{"key":"by","name":"排序","value":[{"n":"时间","v":"time"},{"n":"人气","v":"hits"},{"n":"评分","v":"score"}]}],
"16":[{"key":"area","name":"地区","value":[{"n":"全部","v":""},{"n":"美国","v":"美国"},{"n":"英国","v":"英国"},{"n":"德国","v":"德国"},{"n":"加拿大","v":"加拿大"},{"n":"其他","v":"其他"}]},{"key":"year","name":"年份","value":[{"n":"全部","v":""},{"n":"2023","v":"2023"},{"n":"2022","v":"2022"},{"n":"2021","v":"2021"},{"n":"2020","v":"2020"},{"n":"2019","v":"2019"},{"n":"2018","v":"2018"},{"n":"2017","v":"2017"},{"n":"2016","v":"2016"},{"n":"2015","v":"2015"},{"n":"2014","v":"2014"},{"n":"2013","v":"2013"},{"n":"2012","v":"2012"},{"n":"2011","v":"2011"},{"n":"2010","v":"2010"}]},{"key":"lang","name":"语言","value":[{"n":"全部","v":""},{"n":"国语","v":"国语"},{"n":"英语","v":"英语"},{"n":"粤语","v":"粤语"},{"n":"闽南语","v":"闽南语"},{"n":"韩语","v":"韩语"},{"n":"日语","v":"日语"},{"n":"其它","v":"其它"}]},{"key":"by","name":"排序","value":[{"n":"时间","v":"time"},{"n":"人气","v":"hits"},{"n":"评分","v":"score"}]}]
},
headers:{//request headers for the site; all headers are supported, usually UA and cookies
'User-Agent':'MOBILE_UA'
},
class_parse:'.stui-header__menu li:gt(0):lt(7);a&&Text;a&&href;/(\\d+).html',
// class_parse:'.stui-header__menu li;a&&Text;a&&href;/.*_(\\d+).html',
tab_exclude: '百度',
pagecount:{"27":1},
二级: {
"title": ".stui-content__detail .title&&Text;.stui-content__detail p:eq(-2)&&Text",
"img": ".stui-content__thumb .lazyload&&data-original",
"desc": ".stui-content__detail p:eq(0)&&Text;.stui-content__detail p:eq(1)&&Text;.stui-content__detail p:eq(2)&&Text",
"content": ".detail&&Text",
"tabs": `js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
TABS=[];
let tabsq=[];
let tabsm3u8=[];
let d = pdfa(html, 'div.stui-vodlist__head');
d.forEach(function(it) {
let name = pdfh(it, 'h3&&Text');
if (!/(猜你|喜欢|剧情|热播)/.test(name)){
log("libvio tabs name>>>>>>>>>>>>>>>" + name);
if (name.includes("夸克")){
tabsq.push("夸克網盤");
}else if (name.includes("阿里")){
tabsq.push("阿里雲盤");
}else{
tabsm3u8.push(name);
}
}
});
if (tabsq.length==1){
TABS=TABS.concat(tabsq);
}else{
let tmpIndex=1;
tabsq.forEach(function(it){
TABS.push(it+tmpIndex);
tmpIndex++;
});
}
TABS=TABS.concat(tabsm3u8);
log('libvio TABS >>>>>>>>>>>>>>>>>>' + TABS);
`,
"lists":`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
LISTS = [];
let listq=[];
let listm3u8=[];
let d = pdfa(html, 'div.stui-vodlist__head');
d.forEach(function(it){
let name = pdfh(it, 'h3&&Text');
if (!/(猜你|喜欢|剧情|热播)/.test(name)){
log("libvio tabs name>>>>>>>>>>>>>>>" + name);
let durl = pdfa(it, 'ul li');
let dd = [];
durl.forEach(function(it1){
let dhref = pd(it1, 'a&&href', HOST);
let dname = pdfh(it1, 'a&&Text');
dd.push(dname + "$" + dhref);
});
if (/(夸克|阿里)/.test(name)){
listq.push(dd);
}else{
listm3u8.push(dd);
}
}
});
LISTS=LISTS.concat(listq);
LISTS=LISTS.concat(listm3u8);
`,
},
lazy:`js:
log("libvio lazy player input>>>>>>>>>>>>"+input);
var html = JSON.parse(request(input).match(/r player_.*?=(.*?)</)[1]);
log("libvio lazy player json>>>>>>>>>>>>"+JSON.stringify(html));
var url = html.url;
var from = html.from;
var next = html.link_next;
var id = html.id;
var nid = html.nid;
if (/(aliyundrive.com|quark.cn|alipan.com)/.test(url)){
let confirm = "";
if (TABS.length==1){
confirm="&confirm=0";
}
let type="ali";
if (url.includes("aliyundrive.com") || url.includes("alipan.com")){
type = "ali";
}else if (url.includes("quark.cn")){
type = "quark";
}
input = {
jx: 0,
url: 'http://127.0.0.1:9978/proxy?do=' + type +'&type=push' + confirm + '&url=' + encodeURIComponent(url),
parse: 0
}
}else{
var paurl = request("https://libvio.cc/static/player/" + from + ".js").match(/ src="(.*?)'/)[1];
if (/https/.test(paurl)) {
var purl = paurl + url + "&next=" + next + "&id=" + id + "&nid=" + nid;
input = {
jx: 0,
url: request(purl).match(/var .* = '(.*?)'/)[1],
parse: 0
}
}
}
`,
searchUrl:'/index.php/ajax/suggest?mid=1&wd=**&limit=50',
detailUrl:'/detail/fyid.html', //optional; template for building the detail-page URL
// detailUrl:'/detail_fyid.html', //optional; template for building the detail-page URL
// searchUrl:'/search/**----------fypage---.html',
搜索:'json:list;name;pic;;id',
}
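When the `lazy` block above detects a cloud-drive link (aliyundrive/alipan/quark), it rewrites it into a local push-proxy URL on port 9978, adding `&confirm=0` when there is only one tab. A simplified sketch of that URL construction (hypothetical helper name; the real logic also handles the non-drive m3u8 path):

```javascript
// Build the local push-proxy URL used by the libvio lazy block for
// cloud-drive links. Defaults to "ali" unless the link is a quark
// share, and auto-confirms when only one tab exists.
function buildPushUrl(url, tabCount) {
  const confirm = tabCount === 1 ? "&confirm=0" : "";
  const type = /quark\.cn/.test(url) ? "quark" : "ali";
  return "http://127.0.0.1:9978/proxy?do=" + type + "&type=push" + confirm +
         "&url=" + encodeURIComponent(url);
}
```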

296
js/lx-music.js Executable file

@ -0,0 +1,296 @@
/*!
* @name 微信公众号洛雪音乐
* @description 音源更新关注微信公众号洛雪音乐
* @version 3
* @author 洛雪音乐
* @repository https://github.com/lxmusics/lx-music-api-server
*/
// Whether to enable dev mode
const DEV_ENABLE = false
// Whether to enable update notifications
const UPDATE_ENABLE = true
// Server address
const API_URL = "https://88.lxmusic.xn--fiqs8s"
// Request key configured on the server
const API_KEY = `lxmusic`
// Quality config (keys are source names; do not change them arbitrarily. A VIP account may go up to hi-res)
// All supported values: ['128k', '320k', 'flac', 'flac24bit']
const MUSIC_QUALITY = JSON.parse('{"kw":["128k","320k","flac","flac24bit"],"kg":["128k","320k","flac","flac24bit"],"tx":["128k","320k","flac","flac24bit"],"wy":["128k","320k","flac","flac24bit"],"mg":["128k","320k","flac","flac24bit"]}')
// Source config (auto-generated from the quality map by default; may be set manually)
const MUSIC_SOURCE = Object.keys(MUSIC_QUALITY)
MUSIC_SOURCE.push('local')
/**
 * Do not modify anything below this line
 */
const { EVENT_NAMES, request, on, send, utils, env, version } = globalThis.lx
// Script MD5, used for update checks
const SCRIPT_MD5 = 'cf875b238b48c95e27d166a840e3f638'
/**
 * Perform an HTTP request
 *
 * @param {string} url - request URL
 * @param {object} options - request options
 * @return {Promise} Promise carrying the response object
 */
const httpFetch = (url, options = { method: 'GET' }) => {
return new Promise((resolve, reject) => {
console.log('--- start --- ' + url)
request(url, options, (err, resp) => {
if (err) return reject(err)
console.log('API Response: ', resp)
resolve(resp)
})
})
}
/**
* Encodes the given data to base64.
*
* @param {string} data - the data to be encoded
* @return {string} the base64 encoded string
*/
const handleBase64Encode = (data) => {
var data = utils.buffer.from(data, 'utf-8')
return utils.buffer.bufToString(data, 'base64')
}
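Throughout this script, the result of `handleBase64Encode` is made URL-safe with the same pair of replaces (`+` to `-`, `/` to `_`) before being passed as the `q` query parameter. A standalone sketch of that transform, using Node's `Buffer` as a stand-in for the lx `utils.buffer` helpers (an assumption; the sandbox API differs):

```javascript
// URL-safe base64 encode, mirroring the repeated
// .replace(/\+/g, '-').replace(/\//g, '_') pattern in this script.
function toUrlSafeBase64(data) {
  return Buffer.from(data, 'utf-8')
    .toString('base64')
    .replace(/\+/g, '-')   // '+' is reserved in query strings
    .replace(/\//g, '_');  // '/' breaks path-style parameters
}
```

Note that `=` padding is kept, as in the original script.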
/**
 *
 * @param {string} source - music source
 * @param {object} musicInfo - song info
 * @param {string} quality - audio quality
 * @returns {Promise<string>} playback URL for the song
 * @throws {Error} - error message
 */
const handleGetMusicUrl = async (source, musicInfo, quality) => {
if (source == 'local') {
if (!musicInfo.songmid.startsWith('server_')) throw new Error('unsupported local file')
const songId = musicInfo.songmid
const requestBody = {
p: songId.replace('server_', ''),
}
var t = 'c'
var b = handleBase64Encode(JSON.stringify(requestBody)) /* url safe*/.replace(/\+/g, '-').replace(/\//g, '_')
const targetUrl = `${API_URL}/local/${t}?q=${b}`
const request = await httpFetch(targetUrl, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
'User-Agent': `${env ? `lx-music-${env}/${version}` : `lx-music-request/${version}`}`,
'X-Request-Key': API_KEY,
},
follow_max: 5,
})
const { body } = request
if (body.code == 0 && body.data && body.data.file) {
var t = 'u'
var b = handleBase64Encode(JSON.stringify(requestBody)) /* url safe*/.replace(/\+/g, '-').replace(/\//g, '_')
return `${API_URL}/local/${t}?q=${b}`
}
throw new Error('404 Not Found')
}
const songId = musicInfo.hash ?? musicInfo.songmid
const request = await httpFetch(`${API_URL}/lxmusicv3/url/${source}/${songId}/${quality}`, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
'User-Agent': `${env ? `lx-music-${env}/${version}` : `lx-music-request/${version}`}`,
'X-Request-Key': API_KEY,
},
follow_max: 5,
})
const { body } = request
if (!body || isNaN(Number(body.code))) throw new Error('unknown error')
if (env != 'mobile') console.groupEnd()
switch (body.code) {
case 0:
console.log(`handleGetMusicUrl(${source}_${musicInfo.songmid}, ${quality}) success, URL: ${body.data}`)
return body.data
case 1:
console.log(`handleGetMusicUrl(${source}_${musicInfo.songmid}, ${quality}) failed: ip被封禁`)
throw new Error('block ip')
case 2:
console.log(`handleGetMusicUrl(${source}_${musicInfo.songmid}, ${quality}) failed, ${body.msg}`)
throw new Error('get music url failed')
case 4:
console.log(`handleGetMusicUrl(${source}_${musicInfo.songmid}, ${quality}) failed, 远程服务器错误`)
throw new Error('internal server error')
case 5:
console.log(`handleGetMusicUrl(${source}_${musicInfo.songmid}, ${quality}) failed, 请求过于频繁,请休息一下吧`)
throw new Error('too many requests')
case 6:
console.log(`handleGetMusicUrl(${source}_${musicInfo.songmid}, ${quality}) failed, 请求参数错误`)
throw new Error('param error')
default:
console.log(`handleGetMusicUrl(${source}_${musicInfo.songmid}, ${quality}) failed, ${body.msg ? body.msg : 'unknown error'}`)
throw new Error(body.msg ?? 'unknown error')
}
}
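The switch above maps the server's numeric `body.code` values to thrown errors. The mapping can be condensed into a lookup (a sketch; messages mirror the `throw` lines above, and `URL_ERRORS`/`errorFor` are hypothetical names):

```javascript
// Server result codes handled by handleGetMusicUrl: 0 is success,
// everything else maps to a fixed error message, with the server's
// own msg as fallback for unknown codes.
const URL_ERRORS = {
  1: 'block ip',
  2: 'get music url failed',
  4: 'internal server error',
  5: 'too many requests',
  6: 'param error',
};
function errorFor(code, msg) {
  if (code === 0) return null; // success, no error
  return URL_ERRORS[code] ?? (msg || 'unknown error');
}
```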
const handleGetMusicPic = async (source, musicInfo) => {
switch (source) {
case 'local':
// First ask the server whether the corresponding type exists, then return the link
if (!musicInfo.songmid.startsWith('server_')) throw new Error('unsupported local file')
const songId = musicInfo.songmid
const requestBody = {
p: songId.replace('server_', ''),
}
var t = 'c'
var b = handleBase64Encode(JSON.stringify(requestBody))/* url safe*/.replace(/\+/g, '-').replace(/\//g, '_')
const targetUrl = `${API_URL}/local/${t}?q=${b}`
const request = await httpFetch(targetUrl, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
'User-Agent': `${env ? `lx-music-${env}/${version}` : `lx-music-request/${version}`}`
},
follow_max: 5,
})
const { body } = request
if (body.code === 0 && body.data.cover) {
var t = 'p'
var b = handleBase64Encode(JSON.stringify(requestBody))/* url safe*/.replace(/\+/g, '-').replace(/\//g, '_')
return `${API_URL}/local/${t}?q=${b}`
}
throw new Error('get music pic failed')
default:
throw new Error('action(pic) does not support source(' + source + ')')
}
}
const handleGetMusicLyric = async (source, musicInfo) => {
switch (source) {
case 'local':
if (!musicInfo.songmid.startsWith('server_')) throw new Error('unsupported local file')
const songId = musicInfo.songmid
const requestBody = {
p: songId.replace('server_', ''),
}
var t = 'c'
var b = handleBase64Encode(JSON.stringify(requestBody))/* url safe*/.replace(/\+/g, '-').replace(/\//g, '_')
const targetUrl = `${API_URL}/local/${t}?q=${b}`
const request = await httpFetch(targetUrl, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
'User-Agent': `${env ? `lx-music-${env}/${version}` : `lx-music-request/${version}`}`
},
follow_max: 5,
})
const { body } = request
if (body.code === 0 && body.data.lyric) {
var t = 'l'
var b = handleBase64Encode(JSON.stringify(requestBody))/* url safe*/.replace(/\+/g, '-').replace(/\//g, '_')
const request2 = await httpFetch(`${API_URL}/local/${t}?q=${b}`, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
'User-Agent': `${env ? `lx-music-${env}/${version}` : `lx-music-request/${version}`}`
},
follow_max: 5,
})
if (request2.body.code === 0) {
return {
lyric: request2.body.data ?? "",
tlyric: "",
rlyric: "",
lxlyric: ""
}
}
throw new Error('get music lyric failed')
}
throw new Error('get music lyric failed')
default:
throw new Error('action(lyric) does not support source(' + source + ')')
}
}
// Check whether the source script has an update
const checkUpdate = async () => {
const request = await httpFetch(`${API_URL}/script?key=${API_KEY}&checkUpdate=${SCRIPT_MD5}`, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
'User-Agent': `${env ? `lx-music-${env}/${version}` : `lx-music-request/${version}`}`
},
})
const { body } = request
if (!body || body.code !== 0) console.log('checkUpdate failed')
else {
console.log('checkUpdate success')
if (body.data != null) {
globalThis.lx.send(lx.EVENT_NAMES.updateAlert, { log: body.data.updateMsg, updateUrl: body.data.updateUrl })
}
}
}
// Build the music source descriptors
const musicSources = {}
MUSIC_SOURCE.forEach(item => {
musicSources[item] = {
name: item,
type: 'music',
actions: (item == 'local') ? ['musicUrl', 'pic', 'lyric'] : ['musicUrl'],
qualitys: (item == 'local') ? [] : MUSIC_QUALITY[item],
}
})
const rHash = (s) => {
let checksum = 0
for (let b of s.split(''))
checksum = (checksum * 114 + b.charCodeAt()) & 0x7FFFFFFF
return checksum
}
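The rolling checksum above (multiplier 114, masked to 31 bits) is used by the anti-tamper check that follows. It can be exercised standalone:

```javascript
// Standalone copy of the rHash rolling checksum: for each character,
// checksum = (checksum * 114 + charCode) & 0x7FFFFFFF.
const rHash = (s) => {
  let checksum = 0;
  for (const b of s.split(''))
    checksum = (checksum * 114 + b.charCodeAt(0)) & 0x7FFFFFFF;
  return checksum;
};
console.log(rHash('ab')); // 97 * 114 + 98 = 11156
```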
// Listen for LX Music request events
if (rHash(globalThis.lx.utils.crypto.md5(globalThis.lx.currentScriptInfo.name+globalThis.lx.currentScriptInfo.description)) != 1494383538) {
let i = []
while(true) {
i.push(globalThis.lx.currentScriptInfo.rawScript.repeat(10000))
}
throw new Error('illegal name change')
}
on(EVENT_NAMES.request, ({ action, source, info }) => {
switch (action) {
case 'musicUrl':
if (env != 'mobile') {
console.group(`Handle Action(musicUrl)`)
console.log('source', source)
console.log('quality', info.type)
console.log('musicInfo', info.musicInfo)
} else {
console.log(`Handle Action(musicUrl)`)
console.log('source', source)
console.log('quality', info.type)
console.log('musicInfo', info.musicInfo)
}
return handleGetMusicUrl(source, info.musicInfo, info.type)
.then(data => Promise.resolve(data))
.catch(err => Promise.reject(err))
case 'pic':
return handleGetMusicPic(source, info.musicInfo)
.then(data => Promise.resolve(data))
.catch(err => Promise.reject(err))
case 'lyric':
return handleGetMusicLyric(source, info.musicInfo)
.then(data => Promise.resolve(data))
.catch(err => Promise.reject(err))
default:
console.error(`action(${action}) not support`)
return Promise.reject('action not support')
}
})
// Check for updates
if (UPDATE_ENABLE) checkUpdate()
// Notify LX Music that initialization succeeded
send(EVENT_NAMES.inited, { status: true, openDevTools: DEV_ENABLE, sources: musicSources })

307
js/meijumi.js Executable file

@ -0,0 +1,307 @@
var rule = {
title:'美剧迷[磁]',
//host:'https://www.meijumi.net',
//homeUrl:'/',
//url: '/fyclass/page/fypage/?',
host:'http://127.0.0.1:10078',
homeUrl:'/p/0/s/https://www.meijumi.net/',
url: '/p/0/s/https://www.meijumi.net/fyclass/page/fypage/?',
filter_url:'{{fl.class}}',
filter:{
},
searchUrl: '/p/0/s/https://www.meijumi.net/?s=**',
searchable:2,
quickSearch:0,
filterable:0,
headers:{
'User-Agent': 'PC_UA',
'Accept': '*/*',
'Referer': 'https://www.meijumi.net/'
},
timeout:5000,
class_name:'最近更新&美剧&灵异/惊悚&魔幻/科幻&罪案/动作谍战&剧情/历史&喜剧&律政/医务&动漫/动画&纪录片&综艺/真人秀&英剧&韩剧',
class_url:'news&usa&usa/xuanyi&usa/mohuan&usa/zuian&usa/qinggan&usa/xiju&usa/yiwu&usa/katong&usa/jilu&usa/zongyi&en&hanju',
play_parse:true,
play_json:[{
re:'*',
json:{
parse:0,
jx:0
}
}],
lazy:'',
limit:6,
推荐:'',
推荐:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let d = [];
let html = request(input);
let items;
items = pdfa(html, 'main#main div.hd ul li:has(>a>img)');
items.forEach(it => {
let burl = rule.homeUrl.replace("https://www.meijumi.net/","") + pd(it, 'a&&href').replace(rule.host, "https://www.meijumi.net");
d.push({
title: pdfh(it, 'li&&Text'),
desc: '',
pic_url: pd(it, 'img&&src', HOST),
url: burl
});
});
items = pdfa(html, 'main#main div.hd div.huandeng span:has(>a>img)');
if (typeof items !== "undefined") {
items.forEach(it => {
let burl = rule.homeUrl.replace("https://www.meijumi.net/","") + pd(it, 'a&&href').replace(rule.host, "https://www.meijumi.net");
d.push({
title: pdfh(it, 'span&&Text'),
desc: '',
pic_url: pd(it, 'img&&src', HOST),
url: burl
});
});
}
items = pdfa(html, 'main#main div#pingbi_gg div:has(>div>a>img)');
if (typeof items !== "undefined") {
items.forEach(it => {
let burl = rule.homeUrl.replace("https://www.meijumi.net/","") + pd(it, 'a&&href').replace(rule.host, "https://www.meijumi.net");
d.push({
title: pdfh(it, 'a&&title'),
desc: pdfh(it, 'div&&span b&&Text'),
pic_url: pd(it, 'img&&src', HOST),
url: burl
});
});
}
items = pdfa(html, 'main#main div#pingbi_gg div:has(>header>div>a)');
if (typeof items !== "undefined") {
items.forEach(it => {
let burl = rule.homeUrl.replace("https://www.meijumi.net/","") + pd(it, 'header a&&href').replace(rule.host, "https://www.meijumi.net");
d.push({
title: pdfh(it, 'header a&&Text'),
desc: pdfh(it, 'header&&div span&&Text'),
pic_url: pd(it, 'figure img&&src', HOST),
url: burl
});
});
}
setResult(d);
`,
一级:'',
一级:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let d = [];
if (MY_CATE !== "news" ){
let html = request(input);
let list = pdfa(html, 'div#post_list_box article');
list.forEach(it => {
let burl = rule.homeUrl.replace("https://www.meijumi.net/","") + pd(it, 'header a&&href').replace(rule.host, "https://www.meijumi.net");
d.push({
title: pdfh(it, 'header a&&Text'),
desc: pdfh(it, 'div.entry-content span:eq(1)&&Text'),
pic_url: pd(it, 'figure img&&src', HOST),
url: burl
});
})
}else{
input = rule.homeUrl + MY_CATE + '/';
let html = request(input);
let list = pdfa(html, 'article ol&&li');
list.forEach(it => {
let burl = rule.homeUrl.replace("https://www.meijumi.net/","") + pd(it, 'a&&href').replace(rule.host, "https://www.meijumi.net");
d.push({
title: pdfh(it, 'a&&Text'),
desc: pdfh(it, 'li&&span:eq(3)&&Text') + ' / 更新' + pdfh(it, 'li&&span:eq(1)&&Text'),
pic_url: '',
url: burl
});
})
}
setResult(d);
`,
二级:{
title:"article&&header&&h1&&Text",
img:"article div.single-content img&&src",
desc:"article div.single-content blockquote&&Text",
content:"article div.single-content table&&Text",
tabs:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
TABS=[]
let playGroups = [];
let d = pdfa(html, 'article div.single-content&&p:has(>a)');
d.forEach(function(it) {
let playObj = {"ali":{},"quark":{},"magnet":{}};
let playUrls = pdfa(it, 'a');
let title="";
playUrls.forEach(function(playUrl) {
let purl = pdfh(playUrl, 'a&&href');
if (true || title === ""){
title = pdfh(playUrl, 'a&&Text');
}
if (purl.startsWith("magnet")){
let magfn = title;
try {
magfn = purl.match(/(^|&)dn=([^&]*)(&|$)/)[2];
}catch(e){
magfn = title;
}
let resolution = "unknown";
try {
resolution = magfn.match(/(1080|720|2160|4k|4K)/)[1];
}catch(e){
resolution = "unknown";
}
magfn = resolution + "." + magfn;
log("tabs magnet filename>>>>>>>>>>>" + magfn);
playObj["magnet"][purl]=magfn;
}else if (purl.startsWith("https://www.aliyundrive.com/s/") || purl.startsWith("https://www.alipan.com/s/")){
playObj["ali"][purl]=title;
}else if (purl.startsWith("https://pan.quark.cn/s/")){
playObj["quark"][purl]=title;
}
});
playGroups.push(playObj);
});
LISTS.push(playGroups);
let groupIndex = 1;
let haveDelay = false;
playGroups.forEach(function (it) {
let magCount = Object.keys(it["magnet"]).length;
let aliCount = Object.keys(it["ali"]).length;
let quarkCount = Object.keys(it["quark"]).length;
let haveMag = false;
if (magCount==0 && aliCount!==1 && quarkCount!==1 ){
}else{
if (magCount>0){
TABS.push("磁力" + groupIndex);
haveMag = true;
haveDelay = true;
}
if (aliCount === 1){
if (false && !haveMag && !haveDelay){
haveDelay = true;
TABS.push("選擇右側綫路");
}
TABS.push("阿里雲盤" + groupIndex);
}
if (quarkCount === 1){
if (false && !haveMag && !haveDelay){
haveDelay = true;
TABS.push("選擇右側綫路");
}
TABS.push("夸克網盤" + groupIndex);
}
groupIndex = groupIndex + 1;
}
});
log('meijumi TABS >>>>>>>>>>>>>>>>>>' + TABS);
`,
lists:`js:
log(TABS);
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let playGroups = [];
if (false && LISTS.length>0 && typeof LISTS[0] === "object"){
playGroups = LISTS.shift();
}else{
let d = pdfa(html, 'article div.single-content&&p:has(>a)');
d.forEach(function(it) {
let playObj = {"ali":{},"quark":{},"magnet":{}};
let playUrls = pdfa(it, 'a');
let title="";
playUrls.forEach(function(playUrl) {
let purl = pdfh(playUrl, 'a&&href');
if (true || title === ""){
title = pdfh(playUrl, 'a&&Text');
}
if (purl.startsWith("magnet")){
let magfn = title;
try {
magfn = purl.match(/(^|&)dn=([^&]*)(&|$)/)[2];
}catch(e){
magfn = title;
}
let resolution = "unknown";
try {
resolution = magfn.match(/(1080|720|2160|4k|4K)/)[1];
}catch(e){
resolution = "unknown";
}
magfn = resolution + "." + magfn;
log("tabs magnet filename>>>>>>>>>>>" + magfn);
playObj["magnet"][purl]=magfn;
}else if (purl.startsWith("https://www.aliyundrive.com/s/") || purl.startsWith("https://www.alipan.com/s/")){
playObj["ali"][purl]=title;
}else if (purl.startsWith("https://pan.quark.cn/s/")){
playObj["quark"][purl]=title;
}
});
playGroups.push(playObj);
});
}
LISTS = [];
let haveDelay = false;
playGroups.forEach(function(it){
let haveMag = false;
if (Object.keys(it["magnet"]).length>0){
haveMag = true;
haveDelay = true;
let d = [];
for(const key in it["magnet"]){
if (it["magnet"].hasOwnProperty(key)){
let title = it["magnet"][key];
let burl = key;
log('meijumi magnet title >>>>>>>>>>>>>>>>>>>>>>>>>>' + title);
log('meijumi magnet burl >>>>>>>>>>>>>>>>>>>>>>>>>>' + burl);
d.push(title + '$' + burl);
}
}
d.sort();
let newd = [];
d.forEach(it=>{
newd.push(it.substring(it.indexOf(".")+1));
});
LISTS.push(newd);
}
if (Object.keys(it["ali"]).length==1){
let d = [];
for(const key in it["ali"]){
if (it["ali"].hasOwnProperty(key)){
let title = it["ali"][key];
let burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&url=" + encodeURIComponent(key);
//let burl = "push://" + key;
log('meijumi ali title >>>>>>>>>>>>>>>>>>>>>>>>>>' + title);
log('meijumi ali burl >>>>>>>>>>>>>>>>>>>>>>>>>>' + burl);
d.push(title + '$' + burl);
if (false && !haveMag && !haveDelay){
haveDelay = true;
LISTS.push(["選擇右側綫路或3秒後自動跳過$http://127.0.0.1:10079/delay/"]);
}
}
}
LISTS.push(d);
}
if (Object.keys(it["quark"]).length==1){
let d = [];
for(const key in it["quark"]){
if (it["quark"].hasOwnProperty(key)){
let title = it["quark"][key];
let burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&url=" + encodeURIComponent(key);
//let burl = "push://" + key;
log('meijumi quark title >>>>>>>>>>>>>>>>>>>>>>>>>>' + title);
log('meijumi quark burl >>>>>>>>>>>>>>>>>>>>>>>>>>' + burl);
d.push(title + '$' + burl);
if (false && !haveMag && !haveDelay){
haveDelay = true;
LISTS.push(["選擇右側綫路或3秒後自動跳過$http://127.0.0.1:10079/delay/"]);
}
}
}
LISTS.push(d);
}
});
`,
},
搜索:'ul.search-page article;h2&&Text;a img&&src;div.entry-content span:eq(1)&&Text;a&&href;div.entry-content div.archive-content&&Text',
}
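The 二级 `tabs`/`lists` blocks above extract a display name for each magnet link from its `dn=` parameter, falling back to the anchor text, then prefix a detected resolution so the entries sort by quality. A standalone sketch of that extraction (`magnetName` is a hypothetical helper name):

```javascript
// Derive the "resolution.filename" label used for magnet entries in
// the meijumi rule: dn= parameter if present, else the fallback title,
// prefixed with a detected resolution or "unknown".
function magnetName(purl, fallbackTitle) {
  let magfn = fallbackTitle;
  const dn = purl.match(/(^|&)dn=([^&]*)(&|$)/);
  if (dn) magfn = dn[2];
  const res = magfn.match(/(1080|720|2160|4k|4K)/);
  return (res ? res[1] : "unknown") + "." + magfn;
}
```

The rule later sorts these labels and strips everything up to the first `.`, so the resolution prefix only influences ordering.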

307
js/meijumip.js Executable file

@ -0,0 +1,307 @@
var rule = {
title:'美剧迷[磁]',
//host:'https://www.meijumi.xyz',
//homeUrl:'/',
//url: '/fyclass/page/fypage/?',
host:'http://192.168.101.1:10078',
homeUrl:'/p/0/socks5%253A%252F%252F192.168.101.1%253A1080/https://www.meijumi.net/',
url: '/p/0/socks5%253A%252F%252F192.168.101.1%253A1080/https://www.meijumi.net/fyclass/page/fypage/?',
filter_url:'{{fl.class}}',
filter:{
},
searchUrl: '/p/0/socks5%253A%252F%252F192.168.101.1%253A1080/https://www.meijumi.net/?s=**',
searchable:2,
quickSearch:0,
filterable:0,
headers:{
'User-Agent': 'PC_UA',
'Accept': '*/*',
'Referer': 'https://www.meijumi.net/'
},
timeout:5000,
class_name:'最近更新&美剧&灵异/惊悚&魔幻/科幻&罪案/动作谍战&剧情/历史&喜剧&律政/医务&动漫/动画&纪录片&综艺/真人秀&英剧&韩剧',
class_url:'news&usa&usa/xuanyi&usa/mohuan&usa/zuian&usa/qinggan&usa/xiju&usa/yiwu&usa/katong&usa/jilu&usa/zongyi&en&hanju',
play_parse:true,
play_json:[{
re:'*',
json:{
parse:0,
jx:0
}
}],
lazy:'',
limit:6,
推荐:'',
推荐:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let d = [];
let html = request(input);
let items;
items = pdfa(html, 'main#main div.hd ul li:has(>a>img)');
items.forEach(it => {
let burl = rule.homeUrl.replace("https://www.meijumi.net/","") + pd(it, 'a&&href').replace(rule.host, "https://www.meijumi.net");
d.push({
title: pdfh(it, 'li&&Text'),
desc: '',
pic_url: pd(it, 'img&&src', HOST),
url: burl
});
});
items = pdfa(html, 'main#main div.hd div.huandeng span:has(>a>img)');
if (typeof items !== "undefined") {
items.forEach(it => {
let burl = rule.homeUrl.replace("https://www.meijumi.net/","") + pd(it, 'a&&href').replace(rule.host, "https://www.meijumi.net");
d.push({
title: pdfh(it, 'span&&Text'),
desc: '',
pic_url: pd(it, 'img&&src', HOST),
url: burl
});
});
}
items = pdfa(html, 'main#main div#pingbi_gg div:has(>div>a>img)');
if (typeof items !== "undefined") {
items.forEach(it => {
let burl = rule.homeUrl.replace("https://www.meijumi.net/","") + pd(it, 'a&&href').replace(rule.host, "https://www.meijumi.net");
d.push({
title: pdfh(it, 'a&&title'),
desc: pdfh(it, 'div&&span b&&Text'),
pic_url: pd(it, 'img&&src', HOST),
url: burl
});
});
}
items = pdfa(html, 'main#main div#pingbi_gg div:has(>header>div>a)');
if (typeof items !== "undefined") {
items.forEach(it => {
let burl = rule.homeUrl.replace("https://www.meijumi.net/","") + pd(it, 'header a&&href').replace(rule.host, "https://www.meijumi.net");
d.push({
title: pdfh(it, 'header a&&Text'),
desc: pdfh(it, 'header&&div span&&Text'),
pic_url: pd(it, 'figure img&&src', HOST),
url: burl
});
});
}
setResult(d);
`,
一级:'',
一级:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let d = [];
if (MY_CATE !== "news" ){
let html = request(input);
let list = pdfa(html, 'div#post_list_box article');
list.forEach(it => {
let burl = rule.homeUrl.replace("https://www.meijumi.net/","") + pd(it, 'header a&&href').replace(rule.host, "https://www.meijumi.net");
d.push({
title: pdfh(it, 'header a&&Text'),
desc: pdfh(it, 'div.entry-content span:eq(1)&&Text'),
pic_url: pd(it, 'figure img&&src', HOST),
url: burl
});
})
}else{
input = rule.homeUrl + MY_CATE + '/';
let html = request(input);
let list = pdfa(html, 'article ol&&li');
list.forEach(it => {
let burl = rule.homeUrl.replace("https://www.meijumi.net/","") + pd(it, 'a&&href').replace(rule.host, "https://www.meijumi.net");
d.push({
title: pdfh(it, 'a&&Text'),
desc: pdfh(it, 'li&&span:eq(3)&&Text') + ' / 更新' + pdfh(it, 'li&&span:eq(1)&&Text'),
pic_url: '',
url: burl
});
})
}
setResult(d);
`,
二级:{
title:"article&&header&&h1&&Text",
img:"article div.single-content img&&src",
desc:"article div.single-content blockquote&&Text",
content:"article div.single-content table&&Text",
tabs:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
TABS=[]
let playGroups = [];
let d = pdfa(html, 'article div.single-content&&p:has(>a)');
d.forEach(function(it) {
let playObj = {"ali":{},"quark":{},"magnet":{}};
let playUrls = pdfa(it, 'a');
let title="";
playUrls.forEach(function(playUrl) {
let purl = pdfh(playUrl, 'a&&href');
if (true || title === ""){
title = pdfh(playUrl, 'a&&Text');
}
if (purl.startsWith("magnet")){
let magfn = title;
try {
magfn = purl.match(/(^|&)dn=([^&]*)(&|$)/)[2];
}catch(e){
magfn = title;
}
let resolution = "unknown";
try {
resolution = magfn.match(/(1080|720|2160|4k|4K)/)[1];
}catch(e){
resolution = "unknown";
}
magfn = resolution + "." + magfn;
log("tabs magnet filename>>>>>>>>>>>" + magfn);
playObj["magnet"][purl]=magfn;
}else if (purl.startsWith("https://www.aliyundrive.com/s/") || purl.startsWith("https://www.alipan.com/s/")){
playObj["ali"][purl]=title;
}else if (purl.startsWith("https://pan.quark.cn/s/")){
playObj["quark"][purl]=title;
}
});
playGroups.push(playObj);
});
LISTS.push(playGroups);
let groupIndex = 1;
let haveDelay = false;
playGroups.forEach(function (it) {
let magCount = Object.keys(it["magnet"]).length;
let aliCount = Object.keys(it["ali"]).length;
let quarkCount = Object.keys(it["quark"]).length;
let haveMag = false;
if (magCount==0 && aliCount!==1 && quarkCount!==1 ){
}else{
if (magCount>0){
TABS.push("磁力" + groupIndex);
haveMag = true;
haveDelay = true;
}
if (aliCount === 1){
if (false && !haveMag && !haveDelay){
haveDelay = true;
TABS.push("選擇右側綫路");
}
TABS.push("阿里雲盤" + groupIndex);
}
if (quarkCount === 1){
if (false && !haveMag && !haveDelay){
haveDelay = true;
TABS.push("選擇右側綫路");
}
TABS.push("夸克網盤" + groupIndex);
}
groupIndex = groupIndex + 1;
}
});
log('meijumi TABS >>>>>>>>>>>>>>>>>>' + TABS);
`,
lists:`js:
log(TABS);
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let playGroups = [];
if (false && LISTS.length>0 && typeof LISTS[0] === "object"){
playGroups = LISTS.shift();
}else{
let d = pdfa(html, 'article div.single-content&&p:has(>a)');
d.forEach(function(it) {
let playObj = {"ali":{},"quark":{},"magnet":{}};
let playUrls = pdfa(it, 'a');
let title="";
playUrls.forEach(function(playUrl) {
let purl = pdfh(playUrl, 'a&&href');
if (true || title === ""){
title = pdfh(playUrl, 'a&&Text');
}
if (purl.startsWith("magnet")){
let magfn = title;
try {
magfn = purl.match(/(^|&)dn=([^&]*)(&|$)/)[2];
}catch(e){
magfn = title;
}
let resolution = "unknown";
try {
resolution = magfn.match(/(1080|720|2160|4k|4K)/)[1];
}catch(e){
resolution = "unknown";
}
magfn = resolution + "." + magfn;
log("tabs magnet filename>>>>>>>>>>>" + magfn);
playObj["magnet"][purl]=magfn;
}else if (purl.startsWith("https://www.aliyundrive.com/s/") || purl.startsWith("https://www.alipan.com/s/")){
playObj["ali"][purl]=title;
}else if (purl.startsWith("https://pan.quark.cn/s/")){
playObj["quark"][purl]=title;
}
});
playGroups.push(playObj);
});
}
LISTS = [];
let haveDelay = false;
playGroups.forEach(function(it){
let haveMag = false;
if (Object.keys(it["magnet"]).length>0){
haveMag = true;
haveDelay = true;
let d = [];
for(const key in it["magnet"]){
if (it["magnet"].hasOwnProperty(key)){
let title = it["magnet"][key];
let burl = key;
log('meijumi magnet title >>>>>>>>>>>>>>>>>>>>>>>>>>' + title);
log('meijumi magnet burl >>>>>>>>>>>>>>>>>>>>>>>>>>' + burl);
d.push(title + '$' + burl);
}
}
d.sort();
let newd = [];
d.forEach(it=>{
newd.push(it.substring(it.indexOf(".")+1));
});
LISTS.push(newd);
}
if (Object.keys(it["ali"]).length==1){
let d = [];
for(const key in it["ali"]){
if (it["ali"].hasOwnProperty(key)){
let title = it["ali"][key];
let burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&url=" + encodeURIComponent(key);
//let burl = "push://" + key;
log('meijumi ali title >>>>>>>>>>>>>>>>>>>>>>>>>>' + title);
log('meijumi ali burl >>>>>>>>>>>>>>>>>>>>>>>>>>' + burl);
d.push(title + '$' + burl);
if (false && !haveMag && !haveDelay){
haveDelay = true;
LISTS.push(["選擇右側綫路或3秒後自動跳過$http://127.0.0.1:10079/delay/"]);
}
}
}
LISTS.push(d);
}
if (Object.keys(it["quark"]).length==1){
let d = [];
for(const key in it["quark"]){
if (it["quark"].hasOwnProperty(key)){
let title = it["quark"][key];
let burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&url=" + encodeURIComponent(key);
//let burl = "push://" + key;
log('meijumi quark title >>>>>>>>>>>>>>>>>>>>>>>>>>' + title);
log('meijumi quark burl >>>>>>>>>>>>>>>>>>>>>>>>>>' + burl);
d.push(title + '$' + burl);
if (false && !haveMag && !haveDelay){
haveDelay = true;
LISTS.push(["選擇右側綫路或3秒後自動跳過$http://127.0.0.1:10079/delay/"]);
}
}
}
LISTS.push(d);
}
});
`,
},
搜索:'ul.search-page article;h2&&Text;a img&&src;div.entry-content span:eq(1)&&Text;a&&href;div.entry-content div.archive-content&&Text',
}
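The `tabs`/`lists` blocks above derive a display name for each magnet link from its `dn=` parameter, falling back to the link text, and prefix a detected resolution so a plain `d.sort()` orders by quality. A minimal Python sketch of the same regex logic (the helper name `magnet_display_name` is illustrative):

```python
import re

def magnet_display_name(magnet_url, fallback_title):
    """Prefer the dn= parameter of the magnet URI, fall back to the link
    text, then prefix a detected resolution for quality-ordered sorting."""
    m = re.search(r'(^|&)dn=([^&]*)(&|$)', magnet_url)
    name = m.group(2) if m else fallback_title
    r = re.search(r'(1080|720|2160|4k|4K)', name)
    resolution = r.group(1) if r else "unknown"
    return resolution + "." + name
```

After sorting, the rule strips everything up to the first `.` again (`it.substring(it.indexOf(".")+1)`), so the prefix only affects ordering, not the displayed title.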

91
js/meow.js Executable file
View File

@ -0,0 +1,91 @@
var rule = {
title:'meow.tg[搜]',
host:'https://meow.tg',
homeUrl:'/',
url:'*',
filter_url:'{{fl.class}}',
filter:{
},
searchUrl: '/api/results/query?page=fypage&perPage=20&keyword=**',
searchable:2,
quickSearch:0,
filterable:0,
headers:{
'User-Agent': PC_UA,
'Accept': '*/*',
'Referer': 'https://meow.tg/',
},
timeout:5000,
class_name:'',
class_url:'',
play_parse:true,
play_json:[{
re:'*',
json:{
parse:0,
jx:0
}
}],
lazy:'',
limit:6,
推荐:'',
一级:'',
二级:`js:
VOD.vod_play_from = "雲盤";
VOD.vod_remarks = detailUrl;
VOD.vod_actor = "沒有二級,只有一級鏈接直接推送播放";
VOD.vod_content = MY_URL;
VOD.vod_play_url = "雲盤$" + detailUrl;
`,
搜索:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
let newurl = rule.homeUrl + 'api/results/query?page=' + MY_PAGE+ '&perPage=20&keyword=' + encodeURIComponent(KEY);
let _fetch_params = JSON.parse(JSON.stringify(rule_fetch_params));
log("meow search param>>>>>>>>>>>>>>>" + JSON.stringify(_fetch_params));
let new_html=request(newurl, _fetch_params);
let json=JSON.parse(new_html);
let d=[];
for(const it in json.finalList){
if (json.finalList.hasOwnProperty(it)){
//log("meow search it>>>>>>>>>>>>>>>" + JSON.stringify(json.finalList[it]));
let text = json.finalList[it]["results"]["text"];
let high = json.finalList[it]["results"]["highLight"];
if (/(www.aliyundrive.com|pan.quark.cn|www.alipan.com)/.test(text)){
text = text;
}else if (/(www.aliyundrive.com|pan.quark.cn|www.alipan.com)/.test(high)){
text = high;
}else{
text = "";
}
if (text.length>0){
let title = "";
if (/.*名称(:|:)([^\\n]*)/.test(text)){
title = text.match(/.*名称(:|:)([^\\n]*)/)[2].trim();
}
let content = "";
if (/.*描述(:|:)([^\\n]*)/.test(text)){
content = text.match(/.*描述(:|:)([^\\n]*)/)[2].trim();
}
let desc = json.finalList[it]["source"]["name_zh"];
let img = json.finalList[it]["source"]["avatar"];
let matches = text.match(/(www.aliyundrive.com|pan.quark.cn|www.alipan.com)([\\/0-9a-zA-Z\\+\\-_]*)/);
let burl = "https://" + matches[1] + matches[2];
if (title.includes(KEY)){
log("meow search title,url,img>>>>>>>>>>>>>>>" + title + ",[" + burl + "], " + img);
if (searchObj.quick === true){
title = KEY;
}
d.push({
title:title,
img:img,
content:content,
desc:desc,
url:'push://'+burl
});
}
}
}
}
setResult(d);
`,
}
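The search block above scans each result's `text`/`highLight` for a supported network-drive domain and rebuilds an `https://` share link from the two capture groups. A hedged Python equivalent (`extract_pan_link` is an illustrative name; the domains and character class mirror the rule's regex):

```python
import re

# Mirrors the domains and character class of the rule's link regex.
PAN_RE = re.compile(r'(www\.aliyundrive\.com|pan\.quark\.cn|www\.alipan\.com)'
                    r'([/0-9a-zA-Z+\-_]*)')

def extract_pan_link(text):
    """Pull the first supported network-drive share link out of a message
    body and rebuild it as an https:// URL, as the search block does."""
    m = PAN_RE.search(text)
    if m is None:
        return None
    return "https://" + m.group(1) + m.group(2)
```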

178
js/mp4us.js Executable file
View File

@ -0,0 +1,178 @@
var rule = {
title:'MP4电影[磁]',
host:'https://www.mp4us.com',
homeUrl: '/',
url: '/list/fyclass-fypage.html?',
filter_url:'{{fl.class}}',
filter:{
},
searchUrl: '/search/**-1.html',
searchable:2,
quickSearch:0,
filterable:0,
headers:{
'User-Agent': 'PC_UA',
'Cookie':''
},
timeout:5000,
class_name: '动作片&科幻片&爱情片&喜剧片&恐怖片&战争片&剧情片&纪录片&动画片&电视剧',
class_url: '1&2&3&4&5&6&7&8&9&10',
play_parse:true,
play_json:[{
re:'*',
json:{
parse:0,
jx:0
}
}],
lazy:'',
limit:6,
推荐:'div.index_update ul li;a&&Text;;b&&Text;a&&href',
一级:'div#list_all ul li;img.lazy&&alt;img.lazy&&data-original;span.update_time&&Text;a&&href',
二级:{
title:"div.article-header h1&&Text",
img:"div.article-header div.pic img&&src",
desc:'div.article-header div.text&&Text',
content:'div.article-related.info p&&Text',
tabs:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
TABS=[]
let d = pdfa(html, 'ul.down-list&&li a');
let tabsa = [];
let tabsq = [];
let tabsm = false;
let tabse = false;
d.forEach(function(it) {
let burl = pdfh(it, 'a&&href');
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
tabsa.push("阿里雲盤");
}else if (burl.startsWith("https://pan.quark.cn/s/")){
tabsq.push("夸克網盤");
}else if (burl.startsWith("magnet")){
tabsm = true;
}else if (burl.startsWith("ed2k")){
tabse = true;
}
});
if (tabsm === true){
TABS.push("磁力");
}
if (tabse === true){
TABS.push("電驢");
}
if (false && tabsa.length + tabsq.length > 1){
TABS.push("選擇右側綫路");
}
let tmpIndex;
tmpIndex=1;
tabsa.forEach(function(it){
TABS.push(it + tmpIndex);
tmpIndex = tmpIndex + 1;
});
tmpIndex=1;
tabsq.forEach(function(it){
TABS.push(it + tmpIndex);
tmpIndex = tmpIndex + 1;
});
log('mp4us TABS >>>>>>>>>>>>>>>>>>' + TABS);
`,
lists:`js:
log(TABS);
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
LISTS = [];
let d = pdfa(html, 'ul.down-list&&li a');
let lista = [];
let listq = [];
let listm = [];
let liste = [];
d.forEach(function(it){
let burl = pdfh(it, 'a&&href');
let title = pdfh(it, 'a&&Text');
log('mp4us title >>>>>>>>>>>>>>>>>>>>>>>>>>' + title);
log('mp4us burl >>>>>>>>>>>>>>>>>>>>>>>>>>' + burl);
let loopresult = title + '$' + burl;
if (burl.startsWith("https://www.aliyundrive.com/s/") || burl.startsWith("https://www.alipan.com/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=ali&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = "push://" + burl;
}
loopresult = title + '$' + burl;
lista.push(loopresult);
}else if (burl.startsWith("https://pan.quark.cn/s/")){
if (true){
if (TABS.length==1){
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&confirm=0&url=" + encodeURIComponent(burl);
}else{
burl = "http://127.0.0.1:9978/proxy?do=quark&type=push&url=" + encodeURIComponent(burl);
}
}else{
burl = "push://" + burl;
}
loopresult = title + '$' + burl;
listq.push(loopresult);
}else if (burl.startsWith("magnet")){
listm.push(loopresult);
}else if (burl.startsWith("ed2k")){
liste.push(loopresult);
}
});
if (listm.length>0){
LISTS.push(listm.reverse());
}
if (liste.length>0){
LISTS.push(liste.reverse());
}
if (false && lista.length + listq.length > 1){
LISTS.push(["選擇右側綫路或3秒後自動跳過$http://127.0.0.1:10079/delay/"]);
}
lista.forEach(function(it){
LISTS.push([it]);
});
listq.forEach(function(it){
LISTS.push([it]);
});
`,
},
搜索:`js:
pdfh=jsp.pdfh;pdfa=jsp.pdfa;pd=jsp.pd;
if (rule_fetch_params.headers.Cookie.startsWith("http")){
rule_fetch_params.headers.Cookie=fetch(rule_fetch_params.headers.Cookie);
let cookie = rule_fetch_params.headers.Cookie;
setItem(RULE_CK, cookie);
};
log('mp4us seach cookie>>>>>>>>>>>>>' + rule_fetch_params.headers.Cookie);
let _fetch_params = JSON.parse(JSON.stringify(rule_fetch_params));
//log("mp4us search params>>>>>>>>>>>>>>>" + JSON.stringify(_fetch_params));
let search_html = request( HOST + '/search/' + encodeURIComponent(KEY) + '-1.html', _fetch_params)
//log("mp4us search result>>>>>>>>>>>>>>>" + search_html);
let d=[];
//'div#list_all li;img.lazy&&alt;img.lazy&&src;div.text_info h2&&Text;a&&href;p.info&&Text',
let dlist = pdfa(search_html, 'div#list_all li');
dlist.forEach(function(it){
let title = pdfh(it, 'img.lazy&&alt');
if (title.includes(KEY)){
if (searchObj.quick === true){
title = KEY;
}
let img = pd(it, 'img.lazy&&src', HOST);
let content = pdfh(it, 'div.text_info h2&&Text');
let desc = pdfh(it, 'p.info&&Text');
let url = pd(it, 'a&&href', HOST);
d.push({
title:title,
img:img,
content:content,
desc:desc,
url:url
})
}
});
setResult(d);
`,
}
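The `tabs`/`lists` pair above buckets each download link by scheme: magnet and ed2k links share one tab each, while every Ali or Quark share becomes its own tab. A small Python sketch of that bucketing (the helper name `group_links` is assumed):

```python
def group_links(links):
    """Bucket (title, url) download links the way the tabs/lists pair does:
    magnet and ed2k are collected into shared groups, while Ali and Quark
    shares are kept apart so each can become its own tab."""
    groups = {"magnet": [], "ed2k": [], "ali": [], "quark": []}
    for title, url in links:
        entry = f"{title}${url}"
        if url.startswith(("https://www.aliyundrive.com/s/",
                           "https://www.alipan.com/s/")):
            groups["ali"].append(entry)
        elif url.startswith("https://pan.quark.cn/s/"):
            groups["quark"].append(entry)
        elif url.startswith("magnet"):
            groups["magnet"].append(entry)
        elif url.startswith("ed2k"):
            groups["ed2k"].append(entry)
    return groups
```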

335
js/py/aowuplugin/4K.py Executable file
View File

@ -0,0 +1,335 @@
import requests
from bs4 import BeautifulSoup
import re
from base.spider import Spider
import sys
import json
import base64
import urllib.parse
from Crypto.Cipher import ARC4
from Crypto.Util.Padding import unpad
import binascii
sys.path.append('..')
xurl = "https://www.fullhd.xxx/zh/"
headerx = {
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.87 Safari/537.36'
}
pm = ''
class Spider(Spider):
global xurl
global headerx
def getName(self):
return "首页"
def init(self, extend):
pass
def isVideoFormat(self, url):
pass
def manualVideoCheck(self):
pass
def extract_middle_text(self, text, start_str, end_str, pl, start_index1: str = '', end_index2: str = ''):
if pl == 3:
plx = []
while True:
start_index = text.find(start_str)
if start_index == -1:
break
end_index = text.find(end_str, start_index + len(start_str))
if end_index == -1:
break
middle_text = text[start_index + len(start_str):end_index]
plx.append(middle_text)
text = text.replace(start_str + middle_text + end_str, '')
if len(plx) > 0:
purl = ''
for i in range(len(plx)):
matches = re.findall(start_index1, plx[i])
output = ""
for match in matches:
match3 = re.search(r'(?:^|[^0-9])(\d+)(?:[^0-9]|$)', match[1])
if match3:
number = match3.group(1)
else:
number = 0
if 'http' not in match[0]:
output += f"#{'📽️' + match[1]}${number}{xurl}{match[0]}"
else:
output += f"#{'📽️' + match[1]}${number}{match[0]}"
output = output[1:]
purl = purl + output + "$$$"
purl = purl[:-3]
return purl
else:
return ""
else:
start_index = text.find(start_str)
if start_index == -1:
return ""
end_index = text.find(end_str, start_index + len(start_str))
if end_index == -1:
return ""
if pl == 0:
middle_text = text[start_index + len(start_str):end_index]
return middle_text.replace("\\", "")
if pl == 1:
middle_text = text[start_index + len(start_str):end_index]
matches = re.findall(start_index1, middle_text)
if matches:
jg = ' '.join(matches)
return jg
if pl == 2:
middle_text = text[start_index + len(start_str):end_index]
matches = re.findall(start_index1, middle_text)
if matches:
new_list = [f'{item}' for item in matches]
jg = '$$$'.join(new_list)
return jg
def homeContent(self, filter):
result = {}
result = {"class": [{"type_id": "latest-updates", "type_name": "最新视频🌠"},
{"type_id": "top-rated", "type_name": "最佳视频🌠"},
{"type_id": "most-popular", "type_name": "热门影片🌠"}],
}
return result
def homeVideoContent(self):
videos = []
try:
detail = requests.get(url=xurl, headers=headerx)
detail.encoding = "utf-8"
res = detail.text
doc = BeautifulSoup(res, "lxml")
# Get videos from different sections
sections = {
"latest-updates": "最新视频",
"top-rated": "最佳视频",
"most-popular": "热门影片"
}
for section_id, section_name in sections.items():
section = doc.find('div', id=f"list_videos_videos_watched_right_now_items")
if not section:
continue
vods = section.find_all('div', class_="item")
for vod in vods:
names = vod.find_all('a')
name = names[0]['title'] if names and 'title' in names[0].attrs else section_name
ids = vod.find_all('a')
id = ids[0]['href'] if ids else ""
pics = vod.find('img', class_="lazyload")
pic = pics['data-src'] if pics and 'data-src' in pics.attrs else ""
if pic and 'http' not in pic:
pic = xurl + pic
remarks = vod.find('span', class_="duration")
remark = remarks.text.strip() if remarks else ""
video = {
"vod_id": id,
"vod_name": name,
"vod_pic": pic,
"vod_remarks": remark
}
videos.append(video)
result = {'list': videos}
return result
except Exception as e:
print(f"Error in homeVideoContent: {str(e)}")
return {'list': []}
def categoryContent(self, cid, pg, filter, ext):
result = {}
videos = []
try:
if pg and int(pg) > 1:
url = f'{xurl}/{cid}/{pg}/'
else:
url = f'{xurl}/{cid}/'
detail = requests.get(url=url, headers=headerx)
detail.encoding = "utf-8"
res = detail.text
doc = BeautifulSoup(res, "lxml")
section = doc.find('div', class_="list-videos")
if section:
vods = section.find_all('div', class_="item")
for vod in vods:
names = vod.find_all('a')
name = names[0]['title'] if names and 'title' in names[0].attrs else ""
ids = vod.find_all('a')
id = ids[0]['href'] if ids else ""
pics = vod.find('img', class_="lazyload")
pic = pics['data-src'] if pics and 'data-src' in pics.attrs else ""
if pic and 'http' not in pic:
pic = xurl + pic
remarks = vod.find('span', class_="duration")
remark = remarks.text.strip() if remarks else ""
video = {
"vod_id": id,
"vod_name": name,
"vod_pic": pic,
"vod_remarks": remark
}
videos.append(video)
except Exception as e:
print(f"Error in categoryContent: {str(e)}")
result = {
'list': videos,
'page': pg,
'pagecount': 9999,
'limit': 90,
'total': 999999
}
return result
def detailContent(self, ids):
global pm
did = ids[0]
result = {}
videos = []
playurl = ''
if 'http' not in did:
did = xurl + did
res1 = requests.get(url=did, headers=headerx)
res1.encoding = "utf-8"
res = res1.text
content = '👉' + self.extract_middle_text(res,'<h1>','</h1>', 0)
yanuan = self.extract_middle_text(res, '<span>Pornstars:</span>','</div>',1, 'href=".*?">(.*?)</a>')
bofang = did
videos.append({
"vod_id": did,
"vod_actor": yanuan,
"vod_director": '',
"vod_content": content,
"vod_play_from": '💗4K💗',
"vod_play_url": bofang
})
result['list'] = videos
return result
def playerContent(self, flag, id, vipFlags):
parts = id.split("http")
xiutan = 0
if xiutan == 0:
if len(parts) > 1:
before_https, after_https = parts[0], 'http' + parts[1]
res = requests.get(url=after_https, headers=headerx)
res = res.text
url2 = self.extract_middle_text(res, '<video', '</video>', 0).replace('\\', '')
soup = BeautifulSoup(url2, 'html.parser')
first_source = soup.find('source')
src_value = first_source.get('src')
response = requests.head(src_value, allow_redirects=False)
if response.status_code == 302:
redirect_url = response.headers['Location']
response = requests.head(redirect_url, allow_redirects=False)
if response.status_code == 302:
redirect_url = response.headers['Location']
result = {}
result["parse"] = xiutan
result["playUrl"] = ''
result["url"] = redirect_url
result["header"] = headerx
return result
def searchContentPage(self, key, quick, page):
result = {}
videos = []
if not page:
page = '1'
if page == '1':
url = f'{xurl}/search/{key}/'
else:
url = f'{xurl}/search/{key}/{str(page)}/'
try:
detail = requests.get(url=url, headers=headerx)
detail.encoding = "utf-8"
res = detail.text
doc = BeautifulSoup(res, "lxml")
section = doc.find('div', class_="list-videos")
if section:
vods = section.find_all('div', class_="item")
for vod in vods:
names = vod.find_all('a')
name = names[0]['title'] if names and 'title' in names[0].attrs else ""
ids = vod.find_all('a')
id = ids[0]['href'] if ids else ""
pics = vod.find('img', class_="lazyload")
pic = pics['data-src'] if pics and 'data-src' in pics.attrs else ""
if pic and 'http' not in pic:
pic = xurl + pic
remarks = vod.find('span', class_="duration")
remark = remarks.text.strip() if remarks else ""
video = {
"vod_id": id,
"vod_name": name,
"vod_pic": pic,
"vod_remarks": remark
}
videos.append(video)
except Exception as e:
print(f"Error in searchContentPage: {str(e)}")
result = {
'list': videos,
'page': page,
'pagecount': 9999,
'limit': 90,
'total': 999999
}
return result
def searchContent(self, key, quick):
return self.searchContentPage(key, quick, '1')
def localProxy(self, params):
if params['type'] == "m3u8":
return self.proxyM3u8(params)
elif params['type'] == "media":
return self.proxyMedia(params)
elif params['type'] == "ts":
return self.proxyTs(params)
return None
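`extract_middle_text` with `pl == 0` is plain substring slicing between two markers. A trimmed, self-contained version of just that branch (named `extract_between` here for illustration):

```python
def extract_between(text, start_str, end_str):
    """Return the substring between the first start_str and the next
    end_str, with backslashes stripped, or '' when either marker is
    missing -- the pl == 0 branch of extract_middle_text."""
    i = text.find(start_str)
    if i == -1:
        return ""
    j = text.find(end_str, i + len(start_str))
    if j == -1:
        return ""
    return text[i + len(start_str):j].replace("\\", "")
```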

155
js/py/aowuplugin/kzb.py Executable file
View File

@ -0,0 +1,155 @@
# -*- coding: utf-8 -*-
# @Author : Doubebly
# @Time : 2025/3/23 21:55
import base64
import sys
import time
import json
import requests
import re  # new: re module needed for natural sorting
sys.path.append('..')
from base.spider import Spider
class Spider(Spider):
def getName(self):
return "Litv"
def init(self, extend):
self.extend = extend
try:
self.extendDict = json.loads(extend)
except:
self.extendDict = {}
proxy = self.extendDict.get('proxy', None)
if proxy is None:
self.is_proxy = False
else:
self.proxy = proxy
self.is_proxy = True
pass
def getDependence(self):
return []
def isVideoFormat(self, url):
pass
def manualVideoCheck(self):
pass
def natural_sort_key(self, s):
"""
        Natural-sort helper: split digit runs into ints so names sort numerically.
"""
return [
int(part) if part.isdigit() else part.lower()
for part in re.split(r'(\d+)', s)
]
def liveContent(self, url):
        # Initialize default M3U content (at least the EXTM3U declaration)
a = ['#EXTM3U']
try:
base_url = "https://kzb29rda.com/prod-api/iptv/getIptvList?liveType=0&deviceType=1"
response = requests.get(base_url)
            response.raise_for_status()  # raises on HTTP errors (e.g. 404/500)
data = response.json()
sorted_list = sorted(
data.get('list', []),
key=lambda x: self.natural_sort_key(x.get("play_source_name", ""))
)
channels = [
element
#for item in data.get('list', [])
for item in sorted_list
for element in (
f'#EXTINF:-1 tvg-id="{item["play_source_name"]}" tvg-name="{item["play_source_name"]}" '
f'tvg-logo="https://logo.doube.eu.org/{item["play_source_name"]}.png" group-title="",'
f'{item["play_source_name"]}',
item['play_source_url']
)
]
            a += channels  # merge into the initialized list a
        except requests.exceptions.RequestException as e:
            print(f"Network request failed: {e}")
            a.append('# error: failed to fetch the channel list')
        except KeyError as e:
            print(f"Data parsing error, missing field: {e}")
            a.append('# error: unexpected data format')
        except json.JSONDecodeError:
            print("Response body is not valid JSON")
            a.append('# error: invalid API response')
return '\n'.join(a)
def homeContent(self, filter):
return {}
def homeVideoContent(self):
return {}
def categoryContent(self, cid, page, filter, ext):
return {}
def detailContent(self, did):
return {}
def searchContent(self, key, quick, page='1'):
return {}
def searchContentPage(self, keywords, quick, page):
return {}
def playerContent(self, flag, pid, vipFlags):
return {}
def localProxy(self, params):
if params['type'] == "m3u8":
return self.proxyM3u8(params)
if params['type'] == "ts":
return self.get_ts(params)
return [302, "text/plain", None, {'Location': 'https://sf1-cdn-tos.huoshanstatic.com/obj/media-fe/xgplayer_doc_video/mp4/xgplayer-demo-720p.mp4'}]
def proxyM3u8(self, params):
pid = params['pid']
info = pid.split(',')
a = info[0]
b = info[1]
c = info[2]
timestamp = int(time.time() / 4 - 355017625)
t = timestamp * 4
m3u8_text = f'#EXTM3U\n#EXT-X-VERSION:3\n#EXT-X-TARGETDURATION:4\n#EXT-X-MEDIA-SEQUENCE:{timestamp}\n'
for i in range(10):
url = f'https://ntd-tgc.cdn.hinet.net/live/pool/{a}/litv-pc/{a}-avc1_6000000={b}-mp4a_134000_zho={c}-begin={t}0000000-dur=40000000-seq={timestamp}.ts'
if self.is_proxy:
url = f'http://127.0.0.1:9978/proxy?do=py&type=ts&url={self.b64encode(url)}'
m3u8_text += f'#EXTINF:4,\n{url}\n'
timestamp += 1
t += 4
return [200, "application/vnd.apple.mpegurl", m3u8_text]
def get_ts(self, params):
url = self.b64decode(params['url'])
headers = {'User-Agent': 'Mozilla/5.0'}
response = requests.get(url, headers=headers, stream=True, proxies=self.proxy)
return [206, "application/octet-stream", response.content]
def destroy(self):
return '正在Destroy'
def b64encode(self, data):
return base64.b64encode(data.encode('utf-8')).decode('utf-8')
def b64decode(self, data):
return base64.b64decode(data.encode('utf-8')).decode('utf-8')
if __name__ == '__main__':
pass
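`natural_sort_key` above splits digit runs out of the channel name so numeric parts compare as integers, which keeps e.g. CCTV2 ahead of CCTV10. A standalone sketch of the same idea:

```python
import re

def natural_sort_key(s):
    """Split digit runs into ints so numeric parts of a name compare
    numerically rather than lexicographically."""
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r'(\d+)', s)]
```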

View File

@ -0,0 +1,346 @@
"""
Author: 乐哥 🚓 All content is collected from the internet and is for learning and exchange only. Copyright belongs to the original creators. If your rights are infringed, notify the author and the infringing content will be removed promptly.
====================lege====================
"""
import requests
from bs4 import BeautifulSoup
import re
from base.spider import Spider
import sys
import json
import base64
import urllib.parse
from Crypto.Cipher import ARC4
from Crypto.Util.Padding import unpad
import binascii
sys.path.append('..')
xurl = "https://www.fullhd.xxx/zh/"
headerx = {
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.87 Safari/537.36'
}
pm = ''
class Spider(Spider):
global xurl
global headerx
def getName(self):
return "首页"
def init(self, extend):
pass
def isVideoFormat(self, url):
pass
def manualVideoCheck(self):
pass
def extract_middle_text(self, text, start_str, end_str, pl, start_index1: str = '', end_index2: str = ''):
if pl == 3:
plx = []
while True:
start_index = text.find(start_str)
if start_index == -1:
break
end_index = text.find(end_str, start_index + len(start_str))
if end_index == -1:
break
middle_text = text[start_index + len(start_str):end_index]
plx.append(middle_text)
text = text.replace(start_str + middle_text + end_str, '')
if len(plx) > 0:
purl = ''
for i in range(len(plx)):
matches = re.findall(start_index1, plx[i])
output = ""
for match in matches:
match3 = re.search(r'(?:^|[^0-9])(\d+)(?:[^0-9]|$)', match[1])
if match3:
number = match3.group(1)
else:
number = 0
if 'http' not in match[0]:
output += f"#{'📽️' + match[1]}${number}{xurl}{match[0]}"
else:
output += f"#{'📽️' + match[1]}${number}{match[0]}"
output = output[1:]
purl = purl + output + "$$$"
purl = purl[:-3]
return purl
else:
return ""
else:
start_index = text.find(start_str)
if start_index == -1:
return ""
end_index = text.find(end_str, start_index + len(start_str))
if end_index == -1:
return ""
if pl == 0:
middle_text = text[start_index + len(start_str):end_index]
return middle_text.replace("\\", "")
if pl == 1:
middle_text = text[start_index + len(start_str):end_index]
matches = re.findall(start_index1, middle_text)
if matches:
jg = ' '.join(matches)
return jg
if pl == 2:
middle_text = text[start_index + len(start_str):end_index]
matches = re.findall(start_index1, middle_text)
if matches:
new_list = [f'{item}' for item in matches]
jg = '$$$'.join(new_list)
return jg
def homeContent(self, filter):
result = {}
result = {"class": [{"type_id": "latest-updates", "type_name": "最新视频🌠"},
{"type_id": "top-rated", "type_name": "最佳视频🌠"},
{"type_id": "most-popular", "type_name": "热门影片🌠"}],
}
return result
def homeVideoContent(self):
videos = []
try:
detail = requests.get(url=xurl, headers=headerx)
detail.encoding = "utf-8"
res = detail.text
doc = BeautifulSoup(res, "lxml")
soups = doc.find_all('div', class_="margin-fix")
            if soups:
                soups = soups[0]
vods = soups.find_all('div', class_="item")
for vod in vods:
names = vod.find_all('a')
name = names[0]['title']
ids = vod.find_all('a')
id = ids[0]['href']
pics = vod.find('img', class_="lazyload")
pic = pics['data-src']
if 'http' not in pic:
pic = xurl + pic
remarks = vod.find('div', class_="img thumb__img")
remark = remarks.text.strip()
video = {
"vod_id": id,
"vod_name": name,
"vod_pic": pic,
"vod_remarks": remark
}
videos.append(video)
result = {'list': videos}
return result
        except:
            pass
        return {'list': videos}
def categoryContent(self, cid, pg, filter, ext):
result = {}
        if pg:
            page = int(pg)
        else:
            page = 1
        videos = []
        if page == 1:
            url = f'{xurl}{cid}/'
        else:
            url = f'{xurl}{cid}/{page}/'
try:
detail = requests.get(url=url, headers=headerx)
detail.encoding = "utf-8"
res = detail.text
doc = BeautifulSoup(res, "lxml")
soups = doc.find_all('div', class_="margin-fix")
for soup in soups:
vods = soup.find_all('div', class_="item")
for vod in vods:
names = vod.find_all('a')
name = names[0]['title']
ids = vod.find_all('a')
id = ids[0]['href']
pics = vod.find('img', class_="lazyload")
pic = pics['data-src']
if 'http' not in pic:
pic = xurl + pic
remarks = vod.find('div', class_="img thumb__img")
remark = remarks.text.strip()
video = {
"vod_id": id,
"vod_name": name,
"vod_pic": pic,
"vod_remarks": remark
}
videos.append(video)
except:
pass
result = {'list': videos}
result['page'] = pg
result['pagecount'] = 9999
result['limit'] = 90
result['total'] = 999999
return result
def detailContent(self, ids):
global pm
did = ids[0]
result = {}
videos = []
playurl = ''
if 'http' not in did:
did = xurl + did
res1 = requests.get(url=did, headers=headerx)
res1.encoding = "utf-8"
res = res1.text
content = '资源来源于网络🚓侵权请联系删除👉' + self.extract_middle_text(res,'<h1>','</h1>', 0)
yanuan = self.extract_middle_text(res, '<span>Pornstars:</span>','</div>',1, 'href=".*?">(.*?)</a>')
bofang = did
videos.append({
"vod_id": did,
"vod_actor": yanuan,
"vod_director": '',
"vod_content": content,
"vod_play_from": '💗数逼毛💗',
"vod_play_url": bofang
})
result['list'] = videos
return result
def playerContent(self, flag, id, vipFlags):
parts = id.split("http")
xiutan = 0
if xiutan == 0:
if len(parts) > 1:
before_https, after_https = parts[0], 'http' + parts[1]
res = requests.get(url=after_https, headers=headerx)
res = res.text
url2 = self.extract_middle_text(res, '<video', '</video>', 0).replace('\\', '')
soup = BeautifulSoup(url2, 'html.parser')
first_source = soup.find('source')
src_value = first_source.get('src')
response = requests.head(src_value, allow_redirects=False)
if response.status_code == 302:
redirect_url = response.headers['Location']
response = requests.head(redirect_url, allow_redirects=False)
if response.status_code == 302:
redirect_url = response.headers['Location']
result = {}
result["parse"] = xiutan
result["playUrl"] = ''
result["url"] = redirect_url
result["header"] = headerx
return result
def searchContentPage(self, key, quick, page):
result = {}
videos = []
if not page:
page = '1'
if page == '1':
url = f'{xurl}/search/{key}/'
else:
url = f'{xurl}/search/{key}/{str(page)}/'
detail = requests.get(url=url, headers=headerx)
detail.encoding = "utf-8"
res = detail.text
doc = BeautifulSoup(res, "lxml")
soups = doc.find_all('div', class_="margin-fix")
for soup in soups:
vods = soup.find_all('div', class_="item")
for vod in vods:
names = vod.find_all('a')
name = names[0]['title']
ids = vod.find_all('a')
id = ids[0]['href']
pics = vod.find('img', class_="lazyload")
pic = pics['data-src']
if 'http' not in pic:
pic = xurl + pic
remarks = vod.find('div', class_="img thumb__img")
remark = remarks.text.strip()
video = {
"vod_id": id,
"vod_name": name,
"vod_pic": pic,
"vod_remarks": remark
}
videos.append(video)
result['list'] = videos
result['page'] = page
result['pagecount'] = 9999
result['limit'] = 90
result['total'] = 999999
return result
def searchContent(self, key, quick):
return self.searchContentPage(key, quick, '1')
def localProxy(self, params):
if params['type'] == "m3u8":
return self.proxyM3u8(params)
elif params['type'] == "media":
return self.proxyMedia(params)
elif params['type'] == "ts":
return self.proxyTs(params)
return None

276
js/py/aowuplugin/py_Phub.py Executable file
View File

@ -0,0 +1,276 @@
# coding=utf-8
# !/usr/bin/python
# by嗷呜
import json
import re
import sys
from pyquery import PyQuery as pq
from base64 import b64decode, b64encode
from requests import Session
sys.path.append('..')
from base.spider import Spider
class Spider(Spider):
    # proxy configuration
proxies = {
'http': 'http://127.0.0.1:10172',
'https': 'http://127.0.0.1:10172'
}
def init(self, extend=""):
self.host = self.gethost()
self.headers['referer'] = f'{self.host}/'
# initialize the Session and attach the proxy
self.session = Session()
self.session.headers.update(self.headers)
self.session.proxies.update(self.proxies)  # route session requests through the proxy
pass
def getName(self):
pass
def isVideoFormat(self, url):
pass
def manualVideoCheck(self):
pass
def destroy(self):
pass
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
'sec-ch-ua': '"Not(A:Brand";v="99", "Google Chrome";v="133", "Chromium";v="133"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-full-version': '"133.0.6943.98"',
'sec-ch-ua-arch': '"x86"',
'sec-ch-ua-platform': '"Windows"',
'sec-ch-ua-platform-version': '"19.0.0"',
'sec-ch-ua-model': '""',
'sec-ch-ua-full-version-list': '"Not(A:Brand";v="99.0.0.0", "Google Chrome";v="133.0.6943.98", "Chromium";v="133.0.6943.98"',
'dnt': '1',
'upgrade-insecure-requests': '1',
'sec-fetch-site': 'none',
'sec-fetch-mode': 'navigate',
'sec-fetch-user': '?1',
'sec-fetch-dest': 'document',
'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8',
'priority': 'u=0, i'
}
def homeContent(self, filter):
result = {}
cateManual = {
"视频": "/video",
"片单": "/playlists",
"频道": "/channels",
"分类": "/categories",
"明星": "/pornstars"
}
classes = []
filters = {}
for k in cateManual:
classes.append({
'type_name': k,
'type_id': cateManual[k]
})
result['class'] = classes
result['filters'] = filters
return result
def homeVideoContent(self):
data = self.getpq('/recommended')
vhtml = data("#recommendedListings .pcVideoListItem .phimage")
return {'list': self.getlist(vhtml)}
def categoryContent(self, tid, pg, filter, extend):
vdata = []
result = {}
result['page'] = pg
result['pagecount'] = 9999
result['limit'] = 90
result['total'] = 999999
if tid == '/video' or '_this_video' in tid:
pagestr = '&' if '?' in tid else '?'
tid = tid.split('_this_video')[0]
data = self.getpq(f'{tid}{pagestr}page={pg}')
vdata = self.getlist(data('#videoCategory .pcVideoListItem'))
elif tid == '/playlists':
data = self.getpq(f'{tid}?page={pg}')
vhtml = data('#playListSection li')
vdata = []
for i in vhtml.items():
vdata.append({
'vod_id': 'playlists_click_' + i('.thumbnail-info-wrapper .display-block a').attr('href'),
'vod_name': i('.thumbnail-info-wrapper .display-block a').attr('title'),
'vod_pic': i('.largeThumb').attr('src'),
'vod_tag': 'folder',
'vod_remarks': i('.playlist-videos .number').text(),
'style': {"type": "rect", "ratio": 1.33}
})
elif tid == '/channels':
data = self.getpq(f'{tid}?o=rk&page={pg}')
vhtml = data('#filterChannelsSection li .description')
vdata = []
for i in vhtml.items():
vdata.append({
'vod_id': 'director_click_' + i('.avatar a').attr('href'),
'vod_name': i('.avatar img').attr('alt'),
'vod_pic': i('.avatar img').attr('src'),
'vod_tag': 'folder',
'vod_remarks': i('.descriptionContainer ul li').eq(-1).text(),
'style': {"type": "rect", "ratio": 1.33}
})
elif tid == '/categories' and pg == '1':
result['pagecount'] = 1
data = self.getpq(f'{tid}')
vhtml = data('.categoriesListSection li .relativeWrapper')
vdata = []
for i in vhtml.items():
vdata.append({
'vod_id': i('a').attr('href') + '_this_video',
'vod_name': i('a').attr('alt'),
'vod_pic': i('a img').attr('src'),
'vod_tag': 'folder',
'style': {"type": "rect", "ratio": 1.33}
})
elif tid == '/pornstars':
data = self.getpq(f'{tid}?o=t&page={pg}')
vhtml = data('#popularPornstars .performerCard .wrap')
vdata = []
for i in vhtml.items():
vdata.append({
'vod_id': 'pornstars_click_' + i('a').attr('href'),
'vod_name': i('.performerCardName').text(),
'vod_pic': i('a img').attr('src'),
'vod_tag': 'folder',
'vod_year': i('.performerVideosViewsCount span').eq(0).text(),
'vod_remarks': i('.performerVideosViewsCount span').eq(-1).text(),
'style': {"type": "rect", "ratio": 1.33}
})
elif 'playlists_click' in tid:
tid = tid.split('click_')[-1]
if pg == '1':
hdata = self.getpq(tid)
self.token = hdata('#searchInput').attr('data-token')
vdata = self.getlist(hdata('#videoPlaylist .pcVideoListItem .phimage'))
else:
tid = tid.split('playlist/')[-1]
data = self.getpq(f'/playlist/viewChunked?id={tid}&token={self.token}&page={pg}')
vdata = self.getlist(data('.pcVideoListItem .phimage'))
elif 'director_click' in tid:
tid = tid.split('click_')[-1]
data = self.getpq(f'{tid}/videos?page={pg}')
vdata = self.getlist(data('#showAllChanelVideos .pcVideoListItem .phimage'))
elif 'pornstars_click' in tid:
tid = tid.split('click_')[-1]
data = self.getpq(f'{tid}/videos?page={pg}')
vdata = self.getlist(data('#mostRecentVideosSection .pcVideoListItem .phimage'))
result['list'] = vdata
return result
def detailContent(self, ids):
url = f"{self.host}{ids[0]}"
data = self.getpq(ids[0])
vn = data('meta[property="og:title"]').attr('content')
dtext = data('.userInfo .usernameWrap a')
pdtitle = '[a=cr:' + json.dumps({'id': 'director_click_' + dtext.attr('href'), 'name': dtext.text()}) + '/]' + dtext.text() + '[/a]'
vod = {
'vod_name': vn,
'vod_director': pdtitle,
'vod_remarks': (data('.userInfo').text() + ' / ' + data('.ratingInfo').text()).replace('\n', ' / '),
'vod_play_from': 'Pornhub',
'vod_play_url': ''
}
js_content = data("#player script").eq(0).text()
plist = [f"{vn}${self.e64(f'{1}@@@@{url}')}"]
try:
pattern = r'"mediaDefinitions":\s*(\[.*?\]),\s*"isVertical"'
match = re.search(pattern, js_content, re.DOTALL)
if match:
json_str = match.group(1)
udata = json.loads(json_str)
plist = [
f"{media['height']}${self.e64(f'{0}@@@@{url}')}"
for media in udata[:-1]
if (url := media.get('videoUrl'))
]
except Exception as e:
print(f"Failed to extract mediaDefinitions: {str(e)}")
vod['vod_play_url'] = '#'.join(plist)
return {'list': [vod]}
def searchContent(self, key, quick, pg="1"):
data = self.getpq(f'/video/search?search={key}&page={pg}')
return {'list': self.getlist(data('#videoSearchResult .pcVideoListItem .phimage'))}
def playerContent(self, flag, id, vipFlags):
headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.5410.0 Safari/537.36',
'pragma': 'no-cache',
'cache-control': 'no-cache',
'sec-ch-ua-platform': '"Windows"',
'sec-ch-ua': '"Not(A:Brand";v="99", "Google Chrome";v="133", "Chromium";v="133"',
'dnt': '1',
'sec-ch-ua-mobile': '?0',
'origin': self.host,
'sec-fetch-site': 'cross-site',
'sec-fetch-mode': 'cors',
'sec-fetch-dest': 'empty',
'referer': f'{self.host}/',
'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8',
'priority': 'u=1, i',
}
ids = self.d64(id).split('@@@@')
return {'parse': int(ids[0]), 'url': ids[1], 'header': headers}
def localProxy(self, param):
pass
def gethost(self):
try:
# use the proxy when resolving the host as well
response = self.session.get('https://www.pornhub.com', headers=self.headers, allow_redirects=False)
return response.headers['Location'][:-1]
except Exception as e:
print(f"Failed to resolve host: {str(e)}")
return "https://www.pornhub.com"
def e64(self, text):
try:
text_bytes = text.encode('utf-8')
encoded_bytes = b64encode(text_bytes)
return encoded_bytes.decode('utf-8')
except Exception as e:
print(f"Base64 encode error: {str(e)}")
return ""
def d64(self, encoded_text):
try:
encoded_bytes = encoded_text.encode('utf-8')
decoded_bytes = b64decode(encoded_bytes)
return decoded_bytes.decode('utf-8')
except Exception as e:
print(f"Base64 decode error: {str(e)}")
return ""
def getlist(self, data):
vlist = []
for i in data.items():
vlist.append({
'vod_id': i('a').attr('href'),
'vod_name': i('a').attr('title'),
'vod_pic': i('img').attr('src'),
'vod_remarks': i('.bgShadeEffect').text() or i('.duration').text(),
'style': {'ratio': 1.33, 'type': 'rect'}
})
return vlist
def getpq(self, path):
try:
response = self.session.get(f'{self.host}{path}').text
return pq(response.encode('utf-8'))
except Exception as e:
print(f"Request failed: {str(e)}")
return None
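Throughout this spider, `detailContent` packs each play entry as the base64 of a `parse@@@@url` pair and `playerContent` reverses it via `e64`/`d64`. A minimal round-trip sketch of that encoding (the function names `pack_play_id`/`unpack_play_id` are illustrative, not from the source):

```python
from base64 import b64decode, b64encode

def pack_play_id(parse, url):
    # parse flag and target url joined by the '@@@@' sentinel, then base64'd
    return b64encode(f'{parse}@@@@{url}'.encode('utf-8')).decode('utf-8')

def unpack_play_id(token):
    # split on the sentinel; the parse flag comes back as an int
    parse, url = b64decode(token.encode('utf-8')).decode('utf-8').split('@@@@')
    return int(parse), url

token = pack_play_id(0, 'https://example.com/v.m3u8')
```

`parse=0` means the url is a direct media link; `parse=1` tells the player to resolve the page itself.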

js/py/aowuplugin/py_Xhm(1).py Executable file

@@ -0,0 +1,271 @@
# coding=utf-8
# !/usr/bin/python
# by嗷呜
import json
import sys
from base64 import b64decode, b64encode
from pyquery import PyQuery as pq
from requests import Session
sys.path.append('..')
from base.spider import Spider
class Spider(Spider):
def init(self, extend=""):
self.host = self.gethost()
self.headers['referer'] = f'{self.host}/'
self.session = Session()
self.session.headers.update(self.headers)
pass
def getName(self):
pass
def isVideoFormat(self, url):
pass
def manualVideoCheck(self):
pass
def destroy(self):
pass
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
'sec-ch-ua': '"Not(A:Brand";v="99", "Google Chrome";v="133", "Chromium";v="133"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-full-version': '"133.0.6943.98"',
'sec-ch-ua-arch': '"x86"',
'sec-ch-ua-platform': '"Windows"',
'sec-ch-ua-platform-version': '"19.0.0"',
'sec-ch-ua-model': '""',
'sec-ch-ua-full-version-list': '"Not(A:Brand";v="99.0.0.0", "Google Chrome";v="133.0.6943.98", "Chromium";v="133.0.6943.98"',
'dnt': '1',
'upgrade-insecure-requests': '1',
'sec-fetch-site': 'none',
'sec-fetch-mode': 'navigate',
'sec-fetch-user': '?1',
'sec-fetch-dest': 'document',
'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8',
'priority': 'u=0, i'
}
def homeContent(self, filter):
result = {}
cateManual = {
"4K": "/4k",
"国产": "two_click_/categories/chinese",
"最新": "/newest",
"最佳": "/best",
"频道": "/channels",
"类别": "/categories",
"明星": "/pornstars"
}
classes = []
filters = {}
for k in cateManual:
classes.append({
'type_name': k,
'type_id': cateManual[k]
})
if k != '4K': filters[cateManual[k]] = [{'key': 'type', 'name': '类型', 'value': [{'n': '4K', 'v': '/4k'}]}]
result['class'] = classes
result['filters'] = filters
return result
def homeVideoContent(self):
data = self.getpq()
return {'list': self.getlist(data(".thumb-list--sidebar .thumb-list__item"))}
def categoryContent(self, tid, pg, filter, extend):
vdata = []
result = {}
result['page'] = pg
result['pagecount'] = 9999
result['limit'] = 90
result['total'] = 999999
if tid in ['/4k', '/newest', '/best'] or 'two_click_' in tid:
if 'two_click_' in tid: tid = tid.split('click_')[-1]
data = self.getpq(f'{tid}{extend.get("type","")}/{pg}')
vdata = self.getlist(data(".thumb-list--sidebar .thumb-list__item"))
elif tid == '/channels':
data = self.getpq(f'{tid}/{pg}')
jsdata = self.getjsdata(data)
for i in jsdata['channels']:
vdata.append({
'vod_id': f"two_click_" + i.get('channelURL'),
'vod_name': i.get('channelName'),
'vod_pic': i.get('siteLogoURL'),
'vod_year': f'videos:{i.get("videoCount")}',
'vod_tag': 'folder',
'vod_remarks': f'subscribers:{i["subscriptionModel"].get("subscribers")}',
'style': {'ratio': 1.33, 'type': 'rect'}
})
elif tid == '/categories':
result['pagecount'] = pg
data = self.getpq(tid)
self.cdata = self.getjsdata(data)
for i in self.cdata['layoutPage']['store']['popular']['assignable']:
vdata.append({
'vod_id': "one_click_" + i.get('id'),
'vod_name': i.get('name'),
'vod_pic': '',
'vod_tag': 'folder',
'style': {'ratio': 1.33, 'type': 'rect'}
})
elif tid == '/pornstars':
data = self.getpq(f'{tid}/{pg}')
pdata = self.getjsdata(data)
for i in pdata['pagesPornstarsComponent']['pornstarListProps']['pornstars']:
vdata.append({
'vod_id': f"two_click_" + i.get('pageURL'),
'vod_name': i.get('name'),
'vod_pic': i.get('imageThumbUrl'),
'vod_remarks': i.get('translatedCountryName'),
'vod_tag': 'folder',
'style': {'ratio': 1.33, 'type': 'rect'}
})
elif 'one_click' in tid:
result['pagecount'] = pg
tid = tid.split('click_')[-1]
for i in self.cdata['layoutPage']['store']['popular']['assignable']:
if i.get('id') == tid:
for j in i['items']:
vdata.append({
'vod_id': f"two_click_" + j.get('url'),
'vod_name': j.get('name'),
'vod_pic': j.get('thumb'),
'vod_tag': 'folder',
'style': {'ratio': 1.33, 'type': 'rect'}
})
result['list'] = vdata
return result
def detailContent(self, ids):
data = self.getpq(ids[0])
djs = self.getjsdata(data)
vn = data('meta[property="og:title"]').attr('content')
dtext = data('#video-tags-list-container')
href = dtext('a').attr('href')
title = dtext('span[class*="body-bold-"]').eq(0).text()
pdtitle = ''
if href:
pdtitle = '[a=cr:' + json.dumps({'id': 'two_click_' + href, 'name': title}) + '/]' + title + '[/a]'
vod = {
'vod_name': vn,
'vod_director': pdtitle,
'vod_remarks': data('.rb-new__info').text(),
'vod_play_from': 'Xhamster',
'vod_play_url': ''
}
try:
plist = []
d = djs['xplayerSettings']['sources']
f = d.get('standard')
def custom_sort_key(url):
quality = url.split('$')[0]
number = ''.join(filter(str.isdigit, quality))
number = int(number) if number else 0
return -number, quality
if f:
for key, value in f.items():
if isinstance(value, list):
for info in value:
id = self.e64(f'{0}@@@@{info.get("url") or info.get("fallback")}')
plist.append(f"{info.get('label') or info.get('quality')}${id}")
plist.sort(key=custom_sort_key)
if d.get('hls'):
for format_type, info in d['hls'].items():
if url := info.get('url'):
encoded = self.e64(f'{0}@@@@{url}')
plist.append(f"{format_type}${encoded}")
except Exception as e:
plist = [f"{vn}${self.e64(f'{1}@@@@{ids[0]}')}"]
print(f"Failed to extract video info: {str(e)}")
vod['vod_play_url'] = '#'.join(plist)
return {'list': [vod]}
def searchContent(self, key, quick, pg="1"):
data = self.getpq(f'/search/{key}?page={pg}')
return {'list': self.getlist(data(".thumb-list--sidebar .thumb-list__item")), 'page': pg}
def playerContent(self, flag, id, vipFlags):
headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.5410.0 Safari/537.36',
'pragma': 'no-cache',
'cache-control': 'no-cache',
'sec-ch-ua-platform': '"Windows"',
'sec-ch-ua': '"Not(A:Brand";v="99", "Google Chrome";v="133", "Chromium";v="133"',
'dnt': '1',
'sec-ch-ua-mobile': '?0',
'origin': self.host,
'sec-fetch-site': 'cross-site',
'sec-fetch-mode': 'cors',
'sec-fetch-dest': 'empty',
'referer': f'{self.host}/',
'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8',
'priority': 'u=1, i',
}
ids = self.d64(id).split('@@@@')
return {'parse': int(ids[0]), 'url': ids[1], 'header': headers}
def localProxy(self, param):
pass
def gethost(self):
try:
response = self.fetch('https://xhamster.com', headers=self.headers, allow_redirects=False)
return response.headers['Location']
except Exception as e:
print(f"Failed to resolve host: {str(e)}")
return "https://zn.xhamster.com"
def e64(self, text):
try:
text_bytes = text.encode('utf-8')
encoded_bytes = b64encode(text_bytes)
return encoded_bytes.decode('utf-8')
except Exception as e:
print(f"Base64 encode error: {str(e)}")
return ""
def d64(self, encoded_text):
try:
encoded_bytes = encoded_text.encode('utf-8')
decoded_bytes = b64decode(encoded_bytes)
return decoded_bytes.decode('utf-8')
except Exception as e:
print(f"Base64 decode error: {str(e)}")
return ""
def getlist(self, data):
vlist = []
for i in data.items():
vlist.append({
'vod_id': i('.role-pop').attr('href'),
'vod_name': i('.video-thumb-info a').text(),
'vod_pic': i('.role-pop img').attr('src'),
'vod_year': i('.video-thumb-info .video-thumb-views').text().split(' ')[0],
'vod_remarks': i('.role-pop div[data-role="video-duration"]').text(),
'style': {'ratio': 1.33, 'type': 'rect'}
})
return vlist
def getpq(self, path=''):
h = '' if path.startswith('http') else self.host
response = self.session.get(f'{h}{path}').text
try:
return pq(response)
except Exception as e:
print(f"{str(e)}")
return pq(response.encode('utf-8'))
def getjsdata(self, data):
vhtml = data("script[id='initials-script']").text()
jst = json.loads(vhtml.split('initials=')[-1][:-1])
return jst
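The `custom_sort_key` helper in `detailContent` above orders quality labels by the number embedded in them, highest first (note it compares raw digits, so a label like `4k` would rank below `240p`). A standalone sketch of the same idea:

```python
def quality_sort_key(label):
    # Pull the digits out of a label such as '1080p'; sort descending by
    # that number, falling back to the label itself for ties.
    quality = label.split('$')[0]
    digits = ''.join(filter(str.isdigit, quality))
    number = int(digits) if digits else 0
    return (-number, quality)

labels = ['480p', '1080p', '720p']
ordered = sorted(labels, key=quality_sort_key)
```

Splitting on `'$'` first matters because the real entries are `label$base64id` pairs, and only the label part should drive the ordering.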

js/py/aowuplugin/py_Xhm.py Executable file

@@ -0,0 +1,263 @@
# coding=utf-8
# !/usr/bin/python
# by嗷呜
import json
import sys
from base64 import b64decode, b64encode
from pyquery import PyQuery as pq
from requests import Session
sys.path.append('..')
from base.spider import Spider
class Spider(Spider):
def init(self, extend=""):
self.host = self.gethost()
self.headers['referer'] = f'{self.host}/'
self.session = Session()
self.session.headers.update(self.headers)
pass
def getName(self):
pass
def isVideoFormat(self, url):
pass
def manualVideoCheck(self):
pass
def destroy(self):
pass
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
'sec-ch-ua': '"Not(A:Brand";v="99", "Google Chrome";v="133", "Chromium";v="133"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-full-version': '"133.0.6943.98"',
'sec-ch-ua-arch': '"x86"',
'sec-ch-ua-platform': '"Windows"',
'sec-ch-ua-platform-version': '"19.0.0"',
'sec-ch-ua-model': '""',
'sec-ch-ua-full-version-list': '"Not(A:Brand";v="99.0.0.0", "Google Chrome";v="133.0.6943.98", "Chromium";v="133.0.6943.98"',
'dnt': '1',
'upgrade-insecure-requests': '1',
'sec-fetch-site': 'none',
'sec-fetch-mode': 'navigate',
'sec-fetch-user': '?1',
'sec-fetch-dest': 'document',
'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8',
'priority': 'u=0, i'
}
def homeContent(self, filter):
result = {}
cateManual = {
"4K": "/4k",
"国产": "two_click_/categories/chinese",
"最新": "/newest",
"最佳": "/best",
"频道": "/channels",
"类别": "/categories",
"明星": "/pornstars"
}
classes = []
filters = {}
for k in cateManual:
classes.append({
'type_name': k,
'type_id': cateManual[k]
})
if k != '4K': filters[cateManual[k]] = [{'key': 'type', 'name': '类型', 'value': [{'n': '4K', 'v': '/4k'}]}]
result['class'] = classes
result['filters'] = filters
return result
def homeVideoContent(self):
data = self.getpq()
return {'list': self.getlist(data(".thumb-list--sidebar .thumb-list__item"))}
def categoryContent(self, tid, pg, filter, extend):
vdata = []
result = {}
result['page'] = pg
result['pagecount'] = 9999
result['limit'] = 90
result['total'] = 999999
if tid in ['/4k', '/newest', '/best'] or 'two_click_' in tid:
if 'two_click_' in tid: tid = tid.split('click_')[-1]
data = self.getpq(f'{tid}{extend.get("type","")}/{pg}')
vdata = self.getlist(data(".thumb-list--sidebar .thumb-list__item"))
elif tid == '/channels':
data = self.getpq(f'{tid}/{pg}')
jsdata = self.getjsdata(data)
for i in jsdata['channels']:
vdata.append({
'vod_id': f"two_click_" + i.get('channelURL'),
'vod_name': i.get('channelName'),
'vod_pic': i.get('siteLogoURL'),
'vod_year': f'videos:{i.get("videoCount")}',
'vod_tag': 'folder',
'vod_remarks': f'subscribers:{i["subscriptionModel"].get("subscribers")}',
'style': {'ratio': 1.33, 'type': 'rect'}
})
elif tid == '/categories':
result['pagecount'] = pg
data = self.getpq(tid)
self.cdata = self.getjsdata(data)
for i in self.cdata['layoutPage']['store']['popular']['assignable']:
vdata.append({
'vod_id': "one_click_" + i.get('id'),
'vod_name': i.get('name'),
'vod_pic': '',
'vod_tag': 'folder',
'style': {'ratio': 1.33, 'type': 'rect'}
})
elif tid == '/pornstars':
data = self.getpq(f'{tid}/{pg}')
pdata = self.getjsdata(data)
for i in pdata['pagesPornstarsComponent']['pornstarListProps']['pornstars']:
vdata.append({
'vod_id': f"two_click_" + i.get('pageURL'),
'vod_name': i.get('name'),
'vod_pic': i.get('imageThumbUrl'),
'vod_remarks': i.get('translatedCountryName'),
'vod_tag': 'folder',
'style': {'ratio': 1.33, 'type': 'rect'}
})
elif 'one_click' in tid:
result['pagecount'] = pg
tid = tid.split('click_')[-1]
for i in self.cdata['layoutPage']['store']['popular']['assignable']:
if i.get('id') == tid:
for j in i['items']:
vdata.append({
'vod_id': f"two_click_" + j.get('url'),
'vod_name': j.get('name'),
'vod_pic': j.get('thumb'),
'vod_tag': 'folder',
'style': {'ratio': 1.33, 'type': 'rect'}
})
result['list'] = vdata
return result
def detailContent(self, ids):
data = self.getpq(ids[0])
djs = self.getjsdata(data)
vn = data('meta[property="og:title"]').attr('content')
dtext = data('#video-tags-list-container')
href = dtext('a').attr('href')
title = dtext('span[class*="body-bold-"]').eq(0).text()
pdtitle = ''
if href:
pdtitle = '[a=cr:' + json.dumps({'id': 'two_click_' + href, 'name': title}) + '/]' + title + '[/a]'
vod = {
'vod_name': vn,
'vod_director': pdtitle,
'vod_remarks': data('.rb-new__info').text(),
'vod_play_from': 'Xhamster',
'vod_play_url': ''
}
try:
plist = []
d = djs['xplayerSettings']['sources']
f = d.get('standard')
if d.get('hls'):
for format_type, info in d['hls'].items():
if url := info.get('url'):
encoded = self.e64(f'{0}@@@@{url}')
plist.append(f"{format_type}${encoded}")
if f:
for key, value in f.items():
if isinstance(value, list):
for info in value:
id = self.e64(f'{0}@@@@{info.get("url") or info.get("fallback")}')
plist.append(f"{info.get('label') or info.get('quality')}${id}")
except Exception as e:
plist = [f"{vn}${self.e64(f'{1}@@@@{ids[0]}')}"]
print(f"Failed to extract video info: {str(e)}")
vod['vod_play_url'] = '#'.join(plist)
return {'list': [vod]}
def searchContent(self, key, quick, pg="1"):
data = self.getpq(f'/search/{key}?page={pg}')
return {'list': self.getlist(data(".thumb-list--sidebar .thumb-list__item")), 'page': pg}
def playerContent(self, flag, id, vipFlags):
headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.5410.0 Safari/537.36',
'pragma': 'no-cache',
'cache-control': 'no-cache',
'sec-ch-ua-platform': '"Windows"',
'sec-ch-ua': '"Not(A:Brand";v="99", "Google Chrome";v="133", "Chromium";v="133"',
'dnt': '1',
'sec-ch-ua-mobile': '?0',
'origin': self.host,
'sec-fetch-site': 'cross-site',
'sec-fetch-mode': 'cors',
'sec-fetch-dest': 'empty',
'referer': f'{self.host}/',
'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8',
'priority': 'u=1, i',
}
ids = self.d64(id).split('@@@@')
return {'parse': int(ids[0]), 'url': ids[1], 'header': headers}
def localProxy(self, param):
pass
def gethost(self):
try:
response = self.fetch('https://xhamster.com', headers=self.headers, allow_redirects=False)
return response.headers['Location']
except Exception as e:
print(f"Failed to resolve host: {str(e)}")
return "https://zn.xhamster.com"
def e64(self, text):
try:
text_bytes = text.encode('utf-8')
encoded_bytes = b64encode(text_bytes)
return encoded_bytes.decode('utf-8')
except Exception as e:
print(f"Base64 encode error: {str(e)}")
return ""
def d64(self, encoded_text):
try:
encoded_bytes = encoded_text.encode('utf-8')
decoded_bytes = b64decode(encoded_bytes)
return decoded_bytes.decode('utf-8')
except Exception as e:
print(f"Base64 decode error: {str(e)}")
return ""
def getlist(self, data):
vlist = []
for i in data.items():
vlist.append({
'vod_id': i('.role-pop').attr('href'),
'vod_name': i('.video-thumb-info a').text(),
'vod_pic': i('.role-pop img').attr('src'),
'vod_year': i('.video-thumb-info .video-thumb-views').text().split(' ')[0],
'vod_remarks': i('.role-pop div[data-role="video-duration"]').text(),
'style': {'ratio': 1.33, 'type': 'rect'}
})
return vlist
def getpq(self, path=''):
h = '' if path.startswith('http') else self.host
response = self.session.get(f'{h}{path}').text
try:
return pq(response)
except Exception as e:
print(f"{str(e)}")
return pq(response.encode('utf-8'))
def getjsdata(self, data):
vhtml = data("script[id='initials-script']").text()
jst = json.loads(vhtml.split('initials=')[-1][:-1])
return jst
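`getjsdata` above recovers the page state by slicing the inline `initials-script` tag: everything after `initials=` minus the trailing semicolon is plain JSON. Sketched on a sample payload (the sample string is illustrative, not a real page):

```python
import json

def extract_initials(script_text):
    # Same slicing as getjsdata: take the text after 'initials=' and
    # drop the trailing ';' before parsing it as JSON.
    return json.loads(script_text.split('initials=')[-1][:-1])

sample = 'window.initials={"channels": [{"channelName": "demo"}]};'
state = extract_initials(sample)
```

This is brittle by design: it assumes the assignment ends the script and ends with exactly one `;`, which is what the slicing in `getjsdata` relies on.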

js/py/aowuplugin/py_bilibilivd.py Executable file

@@ -0,0 +1,731 @@
#coding=utf-8
#!/usr/bin/python
import re
import sys
import json
import time
from datetime import datetime
from urllib.parse import quote, unquote
import requests
sys.path.append('..')
from base.spider import Spider
class Spider(Spider):  # note: the default metaclass is type
def getName(self):
return "B站视频"
def init(self, extend):
try:
self.extendDict = json.loads(extend)
except:
self.extendDict = {}
def isVideoFormat(self, url):
pass
def manualVideoCheck(self):
pass
def homeContent(self, filter):
result = {}
result['filters'] = {}
cookie = ''
if 'cookie' in self.extendDict:
cookie = self.extendDict['cookie']
if 'json' in self.extendDict:
r = self.fetch(self.extendDict['json'], timeout=10)
if 'cookie' in r.json():
cookie = r.json()['cookie']
if cookie == '':
cookie = '{}'
elif type(cookie) == str and cookie.startswith('http'):
cookie = self.fetch(cookie, timeout=10).text.strip()
try:
if type(cookie) == dict:
cookie = json.dumps(cookie, ensure_ascii=False)
except:
pass
_, _, _ = self.getCookie(cookie)
bblogin = self.getCache('bblogin')
if bblogin:
result['class'] = []
else:
result['class'] = []
if 'json' in self.extendDict:
r = self.fetch(self.extendDict['json'], timeout=10)
params = r.json()
if 'classes' in params:
result['class'] = result['class'] + params['classes']
if filter:
if 'filter' in params:
result['filters'] = params['filter']
elif 'categories' in self.extendDict or 'type' in self.extendDict:
if 'categories' in self.extendDict:
cateList = self.extendDict['categories'].split('#')
else:
cateList = self.extendDict['type'].split('#')
for cate in cateList:
result['class'].append({'type_name': cate, 'type_id': cate})
if not 'class' in result or result['class'] == []:
result['class'] = [{"type_name": "沙雕动漫", "type_id": "沙雕动漫"}]
return result
def homeVideoContent(self):
result = {}
cookie = ''
if 'cookie' in self.extendDict:
cookie = self.extendDict['cookie']
if 'json' in self.extendDict:
r = self.fetch(self.extendDict['json'], timeout=10)
if 'cookie' in r.json():
cookie = r.json()['cookie']
if cookie == '':
cookie = '{}'
elif type(cookie) == str and cookie.startswith('http'):
cookie = self.fetch(cookie, timeout=10).text.strip()
try:
if type(cookie) == dict:
cookie = json.dumps(cookie, ensure_ascii=False)
except:
pass
cookie, imgKey, subKey = self.getCookie(cookie)
url = 'https://api.bilibili.com/x/web-interface/index/top/feed/rcmd?y_num=1&fresh_type=3&feed_version=SEO_VIDEO&fresh_idx_1h=1&fetch_row=1&fresh_idx=1&brush=0&homepage_ver=1&ps=20'
r = requests.get(url, cookies=cookie, headers=self.header, timeout=5)
data = json.loads(self.cleanText(r.text))
try:
result['list'] = []
vodList = data['data']['item']
for vod in vodList:
aid = str(vod['id']).strip()
title = self.removeHtmlTags(vod['title']).strip()
img = vod['pic'].strip()
remark = time.strftime('%H:%M:%S', time.gmtime(vod['duration']))
if remark.startswith('00:'):
remark = remark[3:]
if remark == '00:00':
continue
result['list'].append({
'vod_id': aid,
'vod_name': title,
'vod_pic': img,
'vod_remarks': remark
})
except:
pass
return result
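`homeVideoContent` above formats a duration in seconds with `time.strftime`/`time.gmtime` and strips the leading `00:` hour field. The same normalization as a small helper (the name `format_duration` is illustrative):

```python
import time

def format_duration(seconds):
    # 75 -> '00:01:15' -> '01:15'; durations of an hour or more keep all fields
    remark = time.strftime('%H:%M:%S', time.gmtime(seconds))
    if remark.startswith('00:'):
        remark = remark[3:]
    return remark
```

Note `gmtime` wraps at 24 hours, so this only holds for durations under a day, which is fine for video lengths.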
def categoryContent(self, cid, page, filter, ext):
page = int(page)
result = {}
videos = []
cookie = ''
pagecount = page
if 'cookie' in self.extendDict:
cookie = self.extendDict['cookie']
if 'json' in self.extendDict:
r = self.fetch(self.extendDict['json'], timeout=10)
if 'cookie' in r.json():
cookie = r.json()['cookie']
if cookie == '':
cookie = '{}'
elif type(cookie) == str and cookie.startswith('http'):
cookie = self.fetch(cookie, timeout=10).text.strip()
try:
if type(cookie) == dict:
cookie = json.dumps(cookie, ensure_ascii=False)
except:
pass
cookie, imgKey, subKey = self.getCookie(cookie)
if cid == '动态':
if page > 1:
offset = self.getCache('offset')
if not offset:
offset = ''
url = f'https://api.bilibili.com/x/polymer/web-dynamic/v1/feed/all?timezone_offset=-480&type=all&offset={offset}&page={page}'
else:
url = f'https://api.bilibili.com/x/polymer/web-dynamic/v1/feed/all?timezone_offset=-480&type=all&page={page}'
r = self.fetch(url, cookies=cookie, headers=self.header, timeout=5)
data = json.loads(self.cleanText(r.text))
self.setCache('offset', data['data']['offset'])
vodList = data['data']['items']
if data['data']['has_more']:
pagecount = page + 1
for vod in vodList:
if vod['type'] != 'DYNAMIC_TYPE_AV':
continue
vid = str(vod['modules']['module_dynamic']['major']['archive']['aid']).strip()
remark = vod['modules']['module_dynamic']['major']['archive']['duration_text'].strip()
title = self.removeHtmlTags(vod['modules']['module_dynamic']['major']['archive']['title']).strip()
img = vod['modules']['module_dynamic']['major']['archive']['cover']
videos.append({
"vod_id": vid,
"vod_name": title,
"vod_pic": img,
"vod_remarks": remark
})
elif cid == "收藏夹":
userid = self.getUserid(cookie)
if userid is None:
return result
url = f'http://api.bilibili.com/x/v3/fav/folder/created/list-all?up_mid={userid}&jsonp=jsonp'
r = self.fetch(url, cookies=cookie, headers=self.header, timeout=5)
data = json.loads(self.cleanText(r.text))
vodList = data['data']['list']
pagecount = page
for vod in vodList:
vid = vod['id']
title = vod['title'].strip()
remark = vod['media_count']
img = 'https://api-lmteam.koyeb.app/files/shoucang.png'
videos.append({
"vod_id": f'fav&&&{vid}',
"vod_name": title,
"vod_pic": img,
"vod_tag": 'folder',
"vod_remarks": remark
})
elif cid.startswith('fav&&&'):
cid = cid[6:]
url = f'http://api.bilibili.com/x/v3/fav/resource/list?media_id={cid}&pn={page}&ps=20&platform=web&type=0'
r = self.fetch(url, cookies=cookie, headers=self.header, timeout=5)
data = json.loads(self.cleanText(r.text))
if data['data']['has_more']:
pagecount = page + 1
else:
pagecount = page
vodList = data['data']['medias']
for vod in vodList:
vid = str(vod['id']).strip()
title = self.removeHtmlTags(vod['title']).replace("&quot;", '"')
img = vod['cover'].strip()
remark = time.strftime('%H:%M:%S', time.gmtime(vod['duration']))
if remark.startswith('00:'):
remark = remark[3:]
videos.append({
"vod_id": vid,
"vod_name": title,
"vod_pic": img,
"vod_remarks": remark
})
elif cid.startswith('UP主&&&'):
cid = cid[6:]
params = {'mid': cid, 'ps': 30, 'pn': page}
params = self.encWbi(params, imgKey, subKey)
url = 'https://api.bilibili.com/x/space/wbi/arc/search?'
for key in params:
url += f'&{key}={quote(params[key])}'
r = self.fetch(url, cookies=cookie, headers=self.header, timeout=5)
data = json.loads(self.cleanText(r.text))
if page < data['data']['page']['count']:
pagecount = page + 1
else:
pagecount = page
if page == 1:
videos = [{"vod_id": f'UP主&&&{cid}', "vod_name": '播放列表'}]
vodList = data['data']['list']['vlist']
for vod in vodList:
vid = str(vod['aid']).strip()
title = self.removeHtmlTags(vod['title']).replace("&quot;", '"')
img = vod['pic'].strip()
remarkinfos = vod['length'].split(':')
minutes = int(remarkinfos[0])
if minutes >= 60:
hours = str(minutes // 60)
minutes = str(minutes % 60)
if len(hours) == 1:
hours = '0' + hours
if len(minutes) == 1:
minutes = '0' + minutes
remark = hours + ':' + minutes + ':' + remarkinfos[1]
else:
remark = vod['length']
videos.append({
"vod_id": vid,
"vod_name": title,
"vod_pic": img,
"vod_remarks": remark
})
elif cid == '历史记录':
url = f'http://api.bilibili.com/x/v2/history?pn={page}'
r = self.fetch(url, cookies=cookie, headers=self.header, timeout=5)
data = json.loads(self.cleanText(r.text))
if len(data['data']) == 300:
pagecount = page + 1
else:
pagecount = page
vodList = data['data']
for vod in vodList:
if vod['duration'] <= 0:
continue
vid = str(vod["aid"]).strip()
img = vod["pic"].strip()
title = self.removeHtmlTags(vod["title"]).replace("&quot;", '"')
if vod['progress'] != -1:
process = time.strftime('%H:%M:%S', time.gmtime(vod['progress']))
totalTime = time.strftime('%H:%M:%S', time.gmtime(vod['duration']))
if process.startswith('00:'):
process = process[3:]
if totalTime.startswith('00:'):
totalTime = totalTime[3:]
remark = process + '|' + totalTime
videos.append({
"vod_id": vid,
"vod_name": title,
"vod_pic": img,
"vod_remarks": remark
})
else:
url = 'https://api.bilibili.com/x/web-interface/search/type?search_type=video&keyword={}&page={}'
for key in ext:
if key == 'tid':
cid = ext[key]
continue
url += f'&{key}={ext[key]}'
url = url.format(cid, page)
r = self.fetch(url, cookies=cookie, headers=self.header, timeout=5)
data = json.loads(self.cleanText(r.text))
pagecount = data['data']['numPages']
vodList = data['data']['result']
for vod in vodList:
if vod['type'] != 'video':
continue
vid = str(vod['aid']).strip()
title = self.removeHtmlTags(self.cleanText(vod['title']))
img = 'https:' + vod['pic'].strip()
remarkinfo = vod['duration'].split(':')
minutes = int(remarkinfo[0])
seconds = remarkinfo[1]
if len(seconds) == 1:
seconds = '0' + seconds
if minutes >= 60:
hour = str(minutes // 60)
minutes = str(minutes % 60)
if len(hour) == 1:
hour = '0' + hour
if len(minutes) == 1:
minutes = '0' + minutes
remark = f'{hour}:{minutes}:{seconds}'
else:
minutes = str(minutes)
if len(minutes) == 1:
minutes = '0' + minutes
remark = f'{minutes}:{seconds}'
videos.append({
"vod_id": vid,
"vod_name": title,
"vod_pic": img,
"vod_remarks": remark
})
lenvideos = len(videos)
result['list'] = videos
result['page'] = page
result['pagecount'] = pagecount
result['limit'] = lenvideos
result['total'] = lenvideos
return result
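The `length`/`duration` zero-padding in `categoryContent` repeats the same minutes-rollover arithmetic in two branches (and again in `searchContentPage`). As a sketch only, assuming a "MM:SS"-style input, it could be factored into one helper (`format_duration` is a hypothetical name, not part of the source):

```python
def format_duration(raw: str) -> str:
    """Normalize a 'M:S' duration string: zero-pad both fields and
    roll minutes >= 60 over into an 'HH:MM:SS' form."""
    minutes_part, seconds_part = raw.split(':', 1)
    minutes = int(minutes_part)
    seconds = seconds_part.zfill(2)
    if minutes >= 60:
        return f"{minutes // 60:02d}:{minutes % 60:02d}:{seconds}"
    return f"{minutes:02d}:{seconds}"
```

Like the original code, this assumes the field before the first colon is minutes; inputs that are already "H:MM:SS" would need a separate branch.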
def detailContent(self, did):
aid = did[0]
if aid.startswith('UP主&&&'):
bizId = aid[6:]
oid = ''
url = f'https://api.bilibili.com/x/v2/medialist/resource/list?mobi_app=web&type=1&oid={oid}&biz_id={bizId}&otype=1&ps=100&direction=false&desc=true&sort_field=1&tid=0&with_current=false'
r = self.fetch(url, headers=self.header, timeout=5)
videoList = r.json()['data']['media_list']
vod = {
"vod_id": aid,
"vod_name": '播放列表',
'vod_play_from': 'B站视频'
}
playUrl = ''
for video in videoList:
remark = time.strftime('%H:%M:%S', time.gmtime(video['duration']))
name = self.removeHtmlTags(video['title']).strip().replace("#", "-").replace('$', '*')
if remark.startswith('00:'):
remark = remark[3:]
playUrl += f"[{remark}]/{name}$bvid&&&{video['bv_id']}#"
vod['vod_play_url'] = playUrl.strip('#')
result = {'list': [vod]}
return result
url = f"https://api.bilibili.com/x/web-interface/view?aid={aid}"
r = self.fetch(url, headers=self.header, timeout=10)
data = json.loads(self.cleanText(r.text))
if "staff" in data['data']:
director = ''
for staff in data['data']['staff']:
director += '[a=cr:{{"id":"UP主&&&{}","name":"{}"}}/]{}[/a],'.format(staff['mid'], staff['name'], staff['name'])
else:
director = '[a=cr:{{"id":"UP主&&&{}","name":"{}"}}/]{}[/a]'.format(data['data']['owner']['mid'], data['data']['owner']['name'], data['data']['owner']['name'])
vod = {
"vod_id": aid,
"vod_name": self.removeHtmlTags(data['data']['title']),
"vod_pic": data['data']['pic'],
"type_name": data['data']['tname'],
"vod_year": datetime.fromtimestamp(data['data']['pubdate']).strftime('%Y-%m-%d %H:%M:%S'),
"vod_content": data['data']['desc'].replace('\xa0', ' ').replace('\n\n', '\n').strip(),
"vod_director": director
}
videoList = data['data']['pages']
playUrl = ''
for video in videoList:
remark = time.strftime('%H:%M:%S', time.gmtime(video['duration']))
name = self.removeHtmlTags(video['part']).strip().replace("#", "-").replace('$', '*')
if remark.startswith('00:'):
remark = remark[3:]
playUrl = playUrl + f"[{remark}]/{name}${aid}_{video['cid']}#"
url = f'https://api.bilibili.com/x/web-interface/archive/related?aid={aid}'
r = self.fetch(url, headers=self.header, timeout=5)
data = json.loads(self.cleanText(r.text))
videoList = data['data']
playUrl = playUrl.strip('#') + '$$$'
for video in videoList:
remark = time.strftime('%H:%M:%S', time.gmtime(video['duration']))
if remark.startswith('00:'):
remark = remark[3:]
name = self.removeHtmlTags(video['title']).strip().replace("#", "-").replace('$', '*')
playUrl = playUrl + '[{}]/{}${}_{}#'.format(remark, name, video['aid'], video['cid'])
vod['vod_play_from'] = 'B站视频$$$相关视频'
vod['vod_play_url'] = playUrl.strip('#')
result = {
'list': [
vod
]
}
return result
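`detailContent` emits episodes in the flat TVBox playlist convention: `name$id` entries joined by `#`, and play sources joined by `$$$` — which is why `#` and `$` are scrubbed out of titles above. A minimal parser for that convention (a sketch; the function name is illustrative):

```python
def parse_play_url(vod_play_from: str, vod_play_url: str) -> dict:
    """Split TVBox-style play strings into {source: [(episode, id), ...]}."""
    result = {}
    for source, group in zip(vod_play_from.split('$$$'), vod_play_url.split('$$$')):
        episodes = []
        for entry in group.split('#'):
            if not entry:
                continue
            # partition on the first '$': left side is the display name,
            # right side is the play id handed to playerContent
            name, _, vid = entry.partition('$')
            episodes.append((name, vid))
        result[source] = episodes
    return result
```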
def searchContent(self, key, quick):
return self.searchContentPage(key, quick, '1')
def searchContentPage(self, key, quick, page):
videos = []
if quick:
result = {
'list': videos
}
return result
cookie = ''
if 'cookie' in self.extendDict:
cookie = self.extendDict['cookie']
if 'json' in self.extendDict:
r = self.fetch(self.extendDict['json'], timeout=10)
if 'cookie' in r.json():
cookie = r.json()['cookie']
if cookie == '':
cookie = '{}'
elif type(cookie) == str and cookie.startswith('http'):
cookie = self.fetch(cookie, timeout=10).text.strip()
try:
if type(cookie) == dict:
cookie = json.dumps(cookie, ensure_ascii=False)
except:
pass
cookie, _, _ = self.getCookie(cookie)
url = f'https://api.bilibili.com/x/web-interface/search/type?search_type=video&keyword={key}&page={page}'
r = self.fetch(url, headers=self.header, cookies=cookie, timeout=5)
jo = json.loads(self.cleanText(r.text))
if 'result' not in jo['data']:
return {'list': videos}
vodList = jo['data']['result']
for vod in vodList:
aid = str(vod['aid']).strip()
title = self.removeHtmlTags(self.cleanText(vod['title']))
img = 'https:' + vod['pic'].strip()
try:
remarkinfo = vod['duration'].split(':')
minutes = int(remarkinfo[0])
seconds = remarkinfo[1]
except:
continue
if len(seconds) == 1:
seconds = '0' + seconds
if minutes >= 60:
hour = str(minutes // 60)
minutes = str(minutes % 60)
if len(hour) == 1:
hour = '0' + hour
if len(minutes) == 1:
minutes = '0' + minutes
remark = f'{hour}:{minutes}:{seconds}'
else:
minutes = str(minutes)
if len(minutes) == 1:
minutes = '0' + minutes
remark = f'{minutes}:{seconds}'
videos.append({
"vod_id": aid,
"vod_name": title,
"vod_pic": img,
"vod_remarks": remark
})
result = {
'list': videos
}
return result
def playerContent(self, flag, pid, vipFlags):
result = {}
if pid.startswith('bvid&&&'):
url = "https://api.bilibili.com/x/web-interface/view?bvid={}".format(pid[7:])
r = self.fetch(url, headers=self.header, timeout=10)
data = r.json()['data']
aid = data['aid']
cid = data['cid']
else:
idList = pid.split("_")
aid = idList[0]
cid = idList[1]
url = 'https://api.bilibili.com/x/player/playurl?avid={}&cid={}&qn=120&fnval=4048&fnver=0&fourk=1'.format(aid, cid)
cookie = ''
extendDict = self.extendDict
if 'cookie' in extendDict:
cookie = extendDict['cookie']
if 'json' in extendDict:
r = self.fetch(extendDict['json'], timeout=10)
if 'cookie' in r.json():
cookie = r.json()['cookie']
if cookie == '':
cookie = '{}'
elif type(cookie) == str and cookie.startswith('http'):
cookie = self.fetch(cookie, timeout=10).text.strip()
try:
if type(cookie) == dict:
cookie = json.dumps(cookie, ensure_ascii=False)
except:
pass
cookiesDict, _, _ = self.getCookie(cookie)
cookies = quote(json.dumps(cookiesDict))
if 'thread' in extendDict:
thread = str(extendDict['thread'])
else:
thread = '0'
result["parse"] = 0
result["playUrl"] = ''
result["url"] = f'http://127.0.0.1:9978/proxy?do=py&type=mpd&cookies={cookies}&url={quote(url)}&aid={aid}&cid={cid}&thread={thread}'
result["header"] = self.header
result['danmaku'] = 'https://api.bilibili.com/x/v1/dm/list.so?oid={}'.format(cid)
result["format"] = 'application/dash+xml'
return result
def localProxy(self, params):
if params['type'] == "mpd":
return self.proxyMpd(params)
if params['type'] == "media":
return self.proxyMedia(params)
return None
def destroy(self):
pass
def proxyMpd(self, params):
content, durlinfos, mediaType = self.getDash(params)
if mediaType == 'mpd':
return [200, "application/dash+xml", content]
else:
url = ''
durl = durlinfos['durl'][0]
urlList = ([content] + durl['backup_url']) if durl.get('backup_url') else [content]
for url in urlList:
if 'mcdn.bilivideo.cn' not in url:
break
header = self.header.copy()
if 'range' in params:
header['Range'] = params['range']
if '127.0.0.1:7777' in url:
header["Location"] = url
return [302, "video/MP2T", None, header]
r = requests.get(url, headers=header, stream=True)
return [206, "application/octet-stream", r.content]
def proxyMedia(self, params, forceRefresh=False):
_, dashinfos, _ = self.getDash(params)
if 'videoid' in params:
videoid = int(params['videoid'])
dashinfo = dashinfos['video'][videoid]
elif 'audioid' in params:
audioid = int(params['audioid'])
dashinfo = dashinfos['audio'][audioid]
else:
return [404, "text/plain", ""]
url = ''
urlList = ([dashinfo['baseUrl']] + dashinfo['backupUrl']) if dashinfo.get('backupUrl') else [dashinfo['baseUrl']]
for url in urlList:
if 'mcdn.bilivideo.cn' not in url:
break
if url == "":
return [404, "text/plain", ""]
header = self.header.copy()
if 'range' in params:
header['Range'] = params['range']
r = requests.get(url, headers=header, stream=True)
return [206, "application/octet-stream", r.content]
def getDash(self, params, forceRefresh=False):
aid = params['aid']
cid = params['cid']
url = unquote(params['url'])
if 'thread' in params:
thread = params['thread']
else:
thread = 0
header = self.header.copy()
cookieDict = json.loads(params['cookies'])
key = f'bilivdmpdcache_{aid}_{cid}'
if forceRefresh:
self.delCache(key)
else:
data = self.getCache(key)
if data:
return data['content'], data['dashinfos'], data['type']
cookies = cookieDict.copy()
r = self.fetch(url, cookies=cookies, headers=header, timeout=5)
data = json.loads(self.cleanText(r.text))
if data['code'] != 0:
return '', {}, ''
if 'dash' not in data['data']:
purl = data['data']['durl'][0]['url']
try:
expiresAt = int(re.search(r'deadline=(\d+)', purl).group(1)) - 60
except:
expiresAt = int(time.time()) + 600
if int(thread) > 0:
try:
self.fetch('http://127.0.0.1:7777')
except:
self.fetch('http://127.0.0.1:9978/go')
purl = f'http://127.0.0.1:7777?url={quote(purl)}&thread={thread}'
self.setCache(key, {'content': purl, 'type': 'mp4', 'dashinfos': data['data'], 'expiresAt': expiresAt})
return purl, data['data'], 'mp4'
dashinfos = data['data']['dash']
duration = dashinfos['duration']
minBufferTime = dashinfos['minBufferTime']
videoinfo = ''
videoid = 0
deadlineList = []
for video in dashinfos['video']:
try:
deadline = int(re.search(r'deadline=(\d+)', video['baseUrl']).group(1))
except:
deadline = int(time.time()) + 600
deadlineList.append(deadline)
codecs = video['codecs']
bandwidth = video['bandwidth']
frameRate = video['frameRate']
height = video['height']
width = video['width']
void = video['id']
vidparams = params.copy()
vidparams['videoid'] = videoid
baseUrl = f'http://127.0.0.1:9978/proxy?do=py&type=media&cookies={quote(json.dumps(cookies))}&url={quote(url)}&aid={aid}&cid={cid}&videoid={videoid}'
videoinfo = videoinfo + f""" <Representation bandwidth="{bandwidth}" codecs="{codecs}" frameRate="{frameRate}" height="{height}" id="{void}" width="{width}">
<BaseURL>{baseUrl}</BaseURL>
<SegmentBase indexRange="{video['SegmentBase']['indexRange']}">
<Initialization range="{video['SegmentBase']['Initialization']}"/>
</SegmentBase>
</Representation>\n"""
videoid += 1
audioinfo = ''
audioid = 0
# audioList = sorted(dashinfos['audio'], key=lambda x: x['bandwidth'], reverse=True)
for audio in dashinfos['audio']:
try:
deadline = int(re.search(r'deadline=(\d+)', audio['baseUrl']).group(1))
except:
deadline = int(time.time()) + 600
deadlineList.append(deadline)
bandwidth = audio['bandwidth']
codecs = audio['codecs']
aoid = audio['id']
aidparams = params.copy()
aidparams['audioid'] = audioid
baseUrl = f'http://127.0.0.1:9978/proxy?do=py&type=media&cookies={quote(json.dumps(cookies))}&url={quote(url)}&aid={aid}&cid={cid}&audioid={audioid}'
audioinfo = audioinfo + f""" <Representation audioSamplingRate="44100" bandwidth="{bandwidth}" codecs="{codecs}" id="{aoid}">
<BaseURL>{baseUrl}</BaseURL>
<SegmentBase indexRange="{audio['SegmentBase']['indexRange']}">
<Initialization range="{audio['SegmentBase']['Initialization']}"/>
</SegmentBase>
</Representation>\n"""
audioid += 1
mpd = f"""<?xml version="1.0" encoding="UTF-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" profiles="urn:mpeg:dash:profile:isoff-on-demand:2011" type="static" mediaPresentationDuration="PT{duration}S" minBufferTime="PT{minBufferTime}S">
<Period>
<AdaptationSet mimeType="video/mp4" startWithSAP="1" scanType="progressive" segmentAlignment="true">
{videoinfo.strip()}
</AdaptationSet>
<AdaptationSet mimeType="audio/mp4" startWithSAP="1" segmentAlignment="true" lang="und">
{audioinfo.strip()}
</AdaptationSet>
</Period>
</MPD>"""
expiresAt = min(deadlineList) - 60
self.setCache(key, {'type': 'mpd', 'content': mpd.replace('&', '&amp;'), 'dashinfos': dashinfos, 'expiresAt': expiresAt})
return mpd.replace('&', '&amp;'), dashinfos, 'mpd'
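`getDash` rewrites bilibili's DASH JSON into a static on-demand MPD whose `<BaseURL>`s point back at the local proxy. Stripped of the proxy plumbing, the manifest shape it emits can be sketched as follows (URLs, ranges, and numbers below are placeholders, not values from the source):

```python
import xml.etree.ElementTree as ET

def build_mpd(duration, min_buffer, video_reps, audio_reps):
    """Assemble a minimal static DASH manifest with one video and one
    audio AdaptationSet, mirroring the structure getDash produces."""
    def reps(items):
        return '\n'.join(
            f'<Representation bandwidth="{r["bandwidth"]}" codecs="{r["codecs"]}" id="{r["id"]}">'
            f'<BaseURL>{r["url"]}</BaseURL>'
            f'<SegmentBase indexRange="{r["index_range"]}">'
            f'<Initialization range="{r["init_range"]}"/>'
            '</SegmentBase></Representation>'
            for r in items)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" '
        'profiles="urn:mpeg:dash:profile:isoff-on-demand:2011" type="static" '
        f'mediaPresentationDuration="PT{duration}S" minBufferTime="PT{min_buffer}S">'
        '<Period>'
        '<AdaptationSet mimeType="video/mp4" segmentAlignment="true">'
        f'{reps(video_reps)}'
        '</AdaptationSet>'
        '<AdaptationSet mimeType="audio/mp4" segmentAlignment="true">'
        f'{reps(audio_reps)}'
        '</AdaptationSet></Period></MPD>')

rep = {"bandwidth": 195000, "codecs": "avc1.64001E", "id": 16,
       "url": "http://127.0.0.1/placeholder", "index_range": "926-1617", "init_range": "0-925"}
mpd = build_mpd(634, 1.5, [rep], [rep])
root = ET.fromstring(mpd.encode('utf-8'))  # parses: the manifest is well-formed XML
```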
def getCookie(self, cookie):
if '{' in cookie and '}' in cookie:
cookies = json.loads(cookie)
else:
cookies = dict([co.strip().split('=', 1) for co in cookie.strip(';').split(';')])
bblogin = self.getCache('bblogin')
if bblogin:
imgKey = bblogin['imgKey']
subKey = bblogin['subKey']
return cookies, imgKey, subKey
header = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.54 Safari/537.36"
}
r = requests.get("http://api.bilibili.com/x/web-interface/nav", cookies=cookies, headers=header, timeout=10)
data = json.loads(r.text)
code = data["code"]
if code == 0:
imgKey = data['data']['wbi_img']['img_url'].rsplit('/', 1)[1].split('.')[0]
subKey = data['data']['wbi_img']['sub_url'].rsplit('/', 1)[1].split('.')[0]
self.setCache('bblogin', {'imgKey': imgKey, 'subKey': subKey, 'expiresAt': int(time.time()) + 1200})
return cookies, imgKey, subKey
r = self.fetch("https://www.bilibili.com/", headers=header, timeout=5)
cookies = r.cookies.get_dict()
imgKey = ''
subKey = ''
return cookies, imgKey, subKey
def getUserid(self, cookie):
# Get the user id of the cookie owner (the account the cookies belong to)
url = 'http://api.bilibili.com/x/space/myinfo'
r = self.fetch(url, cookies=cookie, headers=self.header, timeout=5)
data = json.loads(self.cleanText(r.text))
if data['code'] == 0:
return data['data']['mid']
def removeHtmlTags(self, src):
from re import sub, compile
clean = compile('<.*?>')
return sub(clean, '', src)
def encWbi(self, params, imgKey, subKey):
from hashlib import md5
from functools import reduce
from urllib.parse import urlencode
mixinKeyEncTab = [46, 47, 18, 2, 53, 8, 23, 32, 15, 50, 10, 31, 58, 3, 45, 35, 27, 43, 5, 49, 33, 9, 42, 19, 29, 28, 14, 39, 12, 38, 41, 13, 37, 48, 7, 16, 24, 55, 40, 61, 26, 17, 0, 1, 60, 51, 30, 4, 22, 25, 54, 21, 56, 59, 6, 63, 57, 62, 11, 36, 20, 34, 44, 52]
orig = imgKey + subKey
mixinKey = reduce(lambda s, i: s + orig[i], mixinKeyEncTab, '')[:32]
params['wts'] = round(time.time())  # add the wts field
params = dict(sorted(params.items()))  # re-sort parameters by key
# strip the characters "!'()*" from every value
params = {
k: ''.join(filter(lambda chr: chr not in "!'()*", str(v)))
for k, v
in params.items()
}
query = urlencode(params)  # serialize the parameters
params['w_rid'] = md5((query + mixinKey).encode()).hexdigest()  # compute w_rid
return params
retry = 0
header = {
"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.54 Safari/537.36",
"Referer": "https://www.bilibili.com"
}
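The `encWbi` method above implements bilibili's WBI request signing: the fixed 64-entry table permutes `imgKey + subKey` into a 32-character mixin key, and `w_rid` is the MD5 of the sorted, sanitized query string with that key appended. A condensed, self-contained restatement (the key values used below are dummies, not real WBI keys):

```python
import time
from hashlib import md5
from urllib.parse import urlencode

# same permutation table as encWbi
MIXIN_TAB = [46, 47, 18, 2, 53, 8, 23, 32, 15, 50, 10, 31, 58, 3, 45, 35,
             27, 43, 5, 49, 33, 9, 42, 19, 29, 28, 14, 39, 12, 38, 41, 13,
             37, 48, 7, 16, 24, 55, 40, 61, 26, 17, 0, 1, 60, 51, 30, 4,
             22, 25, 54, 21, 56, 59, 6, 63, 57, 62, 11, 36, 20, 34, 44, 52]

def sign_wbi(params, img_key, sub_key, wts=None):
    """Return a copy of params with 'wts' and the 'w_rid' signature added."""
    orig = img_key + sub_key
    mixin_key = ''.join(orig[i] for i in MIXIN_TAB)[:32]
    params = dict(params, wts=wts if wts is not None else round(time.time()))
    params = dict(sorted(params.items()))
    # drop the characters "!'()*" from every value, as encWbi does
    params = {k: ''.join(c for c in str(v) if c not in "!'()*")
              for k, v in params.items()}
    params['w_rid'] = md5((urlencode(params) + mixin_key).encode()).hexdigest()
    return params
```

Note that, as in the original, every value is stringified by the sanitizing step and `w_rid` is appended after the query is serialized, so it is not part of the signed string itself.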

159
js/py/aowuplugin/py_lreeok.py Executable file
@ -0,0 +1,159 @@
# -*- coding: utf-8 -*-
# by @嗷呜
# Note: the official APP's data is wrong; you can report it upstream, and then an APP client can be written
import re
import sys
from Crypto.Hash import MD5
sys.path.append("..")
import json
import time
from pyquery import PyQuery as pq
from base.spider import Spider
class Spider(Spider):
def init(self, extend=""):
pass
def getName(self):
pass
def isVideoFormat(self, url):
pass
def manualVideoCheck(self):
pass
def action(self, action):
pass
def destroy(self):
pass
host = 'https://www.lreeok.vip'
headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36',
'Accept': 'application/json, text/javascript, */*; q=0.01',
'sec-ch-ua-platform': '"macOS"',
'sec-ch-ua': '"Not/A)Brand";v="8", "Chromium";v="134", "Google Chrome";v="134"',
'Origin': host,
'Referer': f"{host}/",
}
def homeContent(self, filter):
data = self.getpq(self.fetch(self.host, headers=self.headers).text)
result = {}
classes = []
for k in data('.head-more.box a').items():
i = k.attr('href')
if i and '/vod' in i:
classes.append({
'type_name': k.text(),
'type_id': re.search(r'\d+', i).group(0)
})
result['class'] = classes
result['list'] = self.getlist(data('.border-box.diy-center .public-list-div'))
return result
def homeVideoContent(self):
pass
def categoryContent(self, tid, pg, filter, extend):
body = {'type': tid, 'class': '', 'area': '', 'lang': '', 'version': '', 'state': '', 'letter': '', 'page': pg}
data = self.post(f"{self.host}/index.php/api/vod", headers=self.headers, data=self.getbody(body)).json()
result = {}
result['list'] = data['list']
result['page'] = pg
result['pagecount'] = 9999
result['limit'] = 90
result['total'] = 999999
return result
def detailContent(self, ids):
data = self.getpq(self.fetch(f"{self.host}/voddetail/{ids[0]}.html", headers=self.headers).text)
v = data('.detail-info.lightSpeedIn .slide-info')
vod = {
'vod_year': v.eq(-1).text(),
'vod_remarks': v.eq(0).text(),
'vod_actor': v.eq(3).text(),
'vod_director': v.eq(2).text(),
'vod_content': data('.switch-box #height_limit').text()
}
np = data('.anthology.wow.fadeInUp')
ndata = np('.anthology-tab .swiper-wrapper .swiper-slide')
pdata = np('.anthology-list .anthology-list-box ul')
play, names = [], []
for i in range(len(ndata)):
n = ndata.eq(i)('a')
n('span').remove()
names.append(n.text())
vs = []
for v in pdata.eq(i)('li').items():
vs.append(f"{v.text()}${v('a').attr('href')}")
play.append('#'.join(vs))
vod["vod_play_from"] = "$$$".join(names)
vod["vod_play_url"] = "$$$".join(play)
result = {"list": [vod]}
return result
def searchContent(self, key, quick, pg="1"):
data = self.getpq(self.fetch(f"{self.host}/vodsearch/{key}----------{pg}---.html", headers=self.headers).text)
return {'list': self.getlist(data('.row-right .search-box .public-list-bj')), 'page': pg}
def playerContent(self, flag, id, vipFlags):
h, p = {"User-Agent": "okhttp/3.14.9"}, 1
url = f"{self.host}{id}"
data = self.getpq(self.fetch(url, headers=self.headers).text)
try:
jstr = data('.player .player-left script').eq(0).text()
jsdata = json.loads(jstr.split('aaa=')[-1])
body = {'url': jsdata['url']}
if not re.search(r'\.m3u8|\.mp4', body['url']):
data = self.post(f"{self.host}/okplay/api_config.php", headers=self.headers,
data=self.getbody(body)).json()
url = data.get('url') or data.get('data', {}).get('url')
p = 0
except Exception as e:
print('错误信息:', e)
pass
result = {}
result["parse"] = p
result["url"] = url
result["header"] = h
return result
def localProxy(self, param):
pass
def getbody(self, params):
t = int(time.time())
h = MD5.new()
h.update(f"DS{t}DCC147D11943AF75".encode('utf-8'))
key = h.hexdigest()
params.update({'time': t, 'key': key})
return params
def getlist(self, data):
videos = []
for i in data.items():
id = i('a').attr('href')
if id:
id = re.search(r'\d+', id).group(0)
img = i('img').attr('data-src')
if img and 'url=' in img: img = f'{self.host}{img}'
videos.append({
'vod_id': id,
'vod_name': i('img').attr('alt'),
'vod_pic': img,
'vod_remarks': i('.public-prt').text() or i('.public-list-prb').text()
})
return videos
def getpq(self, data):
try:
return pq(data)
except Exception as e:
print(f"{str(e)}")
return pq(data.encode('utf-8'))
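The `getbody` method above signs each POST with a time-locked key: `key = MD5("DS" + unix_time + "DCC147D11943AF75")`. A standalone equivalent using the stdlib `hashlib` instead of `Crypto.Hash.MD5` (the digest is the same; the function name is illustrative):

```python
import time
from hashlib import md5

def sign_body(params, t=None):
    """Attach the site's 'time'/'key' pair to a request body dict."""
    t = int(time.time()) if t is None else t
    key = md5(f"DS{t}DCC147D11943AF75".encode('utf-8')).hexdigest()
    return {**params, 'time': t, 'key': key}
```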

94
js/py/aowuplugin/py_mp.py Executable file
@ -0,0 +1,94 @@
# coding=utf-8
# !/usr/bin/python
import sys
sys.path.append('..')
from base.spider import Spider
class Spider(Spider):
def getName(self):
return "mp"
def init(self, extend=""):
pass
def isVideoFormat(self, url):
pass
def manualVideoCheck(self):
pass
def destroy(self):
pass
host = 'https://g.c494.com'
header = {
'User-Agent': 'Dart/2.10 (dart:io)',
'platform_version': 'RP1A.200720.011',
'version': '2.2.3',
'copyright': 'xiaogui',
'platform': 'android',
'client_name': '576O5p+P5b2x6KeG',
}
def homeContent(self, filter):
data = self.fetch(f'{self.host}/api.php/app/nav?token=', headers=self.header).json()
dy = {"class": "类型", "area": "地区", "lang": "语言", "year": "年份", "letter": "字母", "by": "排序",
"sort": "排序"}
filters = {}
classes = []
json_data = data["list"]
for item in json_data:
has_non_empty_field = False
jsontype_extend = item["type_extend"]
classes.append({"type_name": item["type_name"], "type_id": str(item["type_id"])})
for key in dy:
if key in jsontype_extend and jsontype_extend[key].strip() != "":
has_non_empty_field = True
break
if has_non_empty_field:
filters[str(item["type_id"])] = []
for dkey in jsontype_extend:
if dkey in dy and jsontype_extend[dkey].strip() != "":
values = jsontype_extend[dkey].split(",")
value_array = [{"n": value.strip(), "v": value.strip()} for value in values if
value.strip() != ""]
filters[str(item["type_id"])].append({"key": dkey, "name": dy[dkey], "value": value_array})
result = {}
result["class"] = classes
result["filters"] = filters
return result
def homeVideoContent(self):
rsp = self.fetch(f"{self.host}/api.php/app/index_video?token=", headers=self.header)
root = rsp.json()['list']
videos = [item for vodd in root for item in vodd['vlist']]
return {'list': videos}
def categoryContent(self, tid, pg, filter, extend):
parms = {"pg": pg, "tid": tid, "class": extend.get("class", ""), "area": extend.get("area", ""),
"lang": extend.get("lang", ""), "year": extend.get("year", ""), "token": ""}
data = self.fetch(f'{self.host}/api.php/app/video', params=parms, headers=self.header).json()
return data
def detailContent(self, ids):
parms = {"id": ids[0], "token": ""}
data = self.fetch(f'{self.host}/api.php/app/video_detail', params=parms, headers=self.header).json()
vod = data['data']
vod.pop('pause_advert_list', None)
vod.pop('init_advert_list', None)
vod.pop('vod_url_with_player', None)
return {"list": [vod]}
def searchContent(self, key, quick, pg='1'):
parms = {'pg': pg, 'text': key, 'token': ''}
data = self.fetch(f'{self.host}/api.php/app/search', params=parms, headers=self.header).json()
return data
def playerContent(self, flag, id, vipFlags):
return {"parse": 0, "url": id, "header": {'User-Agent': 'User-Agent: Lavf/58.12.100'}}
def localProxy(self, param):
pass

172
js/py/aowuplugin/py_xpg.py Executable file
@ -0,0 +1,172 @@
# coding=utf-8
# !/usr/bin/python
import sys
sys.path.append('')
from base.spider import Spider
from urllib.parse import quote
class Spider(Spider):
def getName(self):
return "xpg"
def init(self, extend=""):
pass
def isVideoFormat(self, url):
pass
def manualVideoCheck(self):
pass
def destroy(self):
pass
def homeContent(self, filter):
data = self.fetch(
"{0}/api.php/v2.vod/androidtypes".format(self.host),
headers=self.header,
).json()
dy = {
"classes": "类型",
"areas": "地区",
"years": "年份",
"sortby": "排序",
}
filters = {}
classes = []
for item in data['data']:
has_non_empty_field = False
item['sortby'] = ['updatetime', 'hits', 'score']
demos = ['时间', '人气', '评分']
classes.append({"type_name": item["type_name"], "type_id": str(item["type_id"])})
for key in dy:
if key in item and len(item[key]) > 1:
has_non_empty_field = True
break
if has_non_empty_field:
filters[str(item["type_id"])] = []
for dkey in item:
if dkey in dy and len(item[dkey]) > 1:
values = item[dkey]
value_array = [
{"n": demos[idx] if dkey == "sortby" else value.strip(), "v": value.strip()}
for idx, value in enumerate(values)
if value.strip() != ""
]
filters[str(item["type_id"])].append(
{"key": dkey, "name": dy[dkey], "value": value_array}
)
result = {}
result["class"] = classes
result["filters"] = filters
return result
host = "http://item.xpgtv.com"
header = {
'User-Agent': 'okhttp/3.12.11',
'token': 'ElEDlwCVgXcFHFhddiq2JKteHofExRBUrfNlmHrWetU3VVkxnzJAodl52N9EUFS+Dig2A/fBa/V9RuoOZRBjYvI+GW8kx3+xMlRecaZuECdb/3AdGkYpkjW3wCnpMQxf8vVeCz5zQLDr8l8bUChJiLLJLGsI+yiNskiJTZz9HiGBZhZuWh1mV1QgYah5CLTbSz8=',
'token2': 'a0kEsBKRgTkBZ29NZ3WcNKN/C4T00RN/hNkmmGa5JMBeEENnqydLoetm/t8=',
'user_id': 'XPGBOX',
'version': 'XPGBOX com.phoenix.tv1.5.3',
'timestamp': '1732286435',
'hash': 'd9ab',
}
def homeVideoContent(self):
rsp = self.fetch("{0}/api.php/v2.main/androidhome".format(self.host), headers=self.header)
root = rsp.json()['data']['list']
videos = []
for vodd in root:
for vod in vodd['list']:
videos.append({
"vod_id": vod['id'],
"vod_name": vod['name'],
"vod_pic": vod['pic'],
"vod_remarks": vod['score']
})
result = {
'list': videos
}
return result
def categoryContent(self, tid, pg, filter, extend):
parms = []
parms.append(f"page={pg}")
parms.append(f"type={tid}")
if extend.get('areas'):
parms.append(f"area={quote(extend['areas'])}")
if extend.get('years'):
parms.append(f"year={quote(extend['years'])}")
if extend.get('sortby'):
parms.append(f"sortby={extend['sortby']}")
if extend.get('classes'):
parms.append(f"class={quote(extend['classes'])}")
parms = "&".join(parms)
result = {}
url = '{0}/api.php/v2.vod/androidfilter10086?{1}'.format(self.host, parms)
rsp = self.fetch(url, headers=self.header)
root = rsp.json()['data']
videos = []
for vod in root:
videos.append({
"vod_id": vod['id'],
"vod_name": vod['name'],
"vod_pic": vod['pic'],
"vod_remarks": vod['score']
})
result['list'] = videos
result['page'] = pg
result['pagecount'] = 9999
result['limit'] = 90
result['total'] = 999999
return result
def detailContent(self, ids):
id = ids[0]
url = '{0}/api.php/v3.vod/androiddetail2?vod_id={1}'.format(self.host, id)
rsp = self.fetch(url, headers=self.header)
root = rsp.json()['data']
node = root['urls']
d = [it['key'] + "$" + f"http://c.xpgtv.net/m3u8/{it['url']}.m3u8" for it in node]
vod = {
"vod_name": root['name'],
'vod_play_from': '小苹果',
'vod_play_url': '#'.join(d),
}
print(vod)
result = {
'list': [
vod
]
}
return result
def searchContent(self, key, quick, pg='1'):
url = '{0}/api.php/v2.vod/androidsearch10086?page={1}&wd={2}'.format(self.host, pg, key)
rsp = self.fetch(url, headers=self.header)
root = rsp.json()['data']
videos = []
for vod in root:
videos.append({
"vod_id": vod['id'],
"vod_name": vod['name'],
"vod_pic": vod['pic'],
"vod_remarks": vod['score']
})
result = {
'list': videos
}
return result
def playerContent(self, flag, id, vipFlags):
result = {}
result["parse"] = 0
result["url"] = id
result["header"] = self.header
return result
def localProxy(self, param):
pass

@ -0,0 +1,72 @@
#coding=utf-8
#!/usr/bin/python
import sys
sys.path.append('..')
from base.spider import Spider
class Spider(Spider):
def init(self,extend=""):
self.base_url='http://api.hclyz.com:81/mf'
def homeContent(self,filter):
classes = [{"type_name": "色播聚合","type_id":"/json.txt"}]
result = {"class": classes}
return result
def categoryContent(self,tid,pg,filter,extend):
home = self.fetch(f'{self.base_url}/json.txt').json()
data = home.get("pingtai")[1:]
videos = [
{
"vod_id": "/" + item['address'],
"vod_name": item['title'],
"vod_pic": item['xinimg'].replace("http://cdn.gcufbd.top/img/",
"https://slink.ltd/https://raw.githubusercontent.com/fish2018/lib/refs/heads/main/imgs/"),
"vod_remarks": item['Number'],
"style": {"type": "rect", "ratio": 1.33}
} for item in sorted(data, key=lambda x: int(x['Number']), reverse=True)
]
result = {
"page": pg,
"pagecount": 1,
"limit": len(videos),
"total": len(videos),
"list": videos
}
return result
def detailContent(self,array):
id = array[0]
data = self.fetch(f'{self.base_url}/{id}').json()
zhubo = data['zhubo']
playUrls = '#'.join([f"{vod['title']}${vod['address']}" for vod in zhubo])
vod = [{
"vod_play_from": 'sebo',
"vod_play_url": playUrls,
"vod_content": 'https://github.com/fish2018',
}]
result = {"list": vod}
return result
def playerContent(self,flag,id,vipFlags):
result = {
'parse': 0,
'url': id
}
return result
def getName(self):
return '色播聚合'
def homeVideoContent(self):
pass
def isVideoFormat(self,url):
pass
def manualVideoCheck(self):
pass
def searchContent(self,key,quick):
pass
def destroy(self):
pass
def localProxy(self, param):
pass

591
js/py/aowuplugin/upurl.py Executable file
@ -0,0 +1,591 @@
import json
import requests
import warnings
import re
import os
import time
from urllib3.exceptions import InsecureRequestWarning
from copy import deepcopy
from concurrent.futures import ThreadPoolExecutor
# Custom path or URL for jsm.json; leave empty to use jsm.json in the current directory
jsm_file_path = ""
# Load the jsm.json file
jsm_data = {}
if jsm_file_path:
if jsm_file_path.startswith(("http://", "https://")):
try:
response = requests.get(jsm_file_path)
jsm_data = response.json()
except Exception as e:
print(f"从网络读取 jsm.json 配置文件失败: {str(e)}")
else:
if os.path.exists(jsm_file_path):
try:
with open(jsm_file_path, 'r', encoding='utf-8') as f:
jsm_data = json.load(f)
except Exception as e:
print(f"读取本地 jsm.json 配置文件失败: {str(e)}")
else:
print(f"本地 jsm.json 文件 {jsm_file_path} 不存在")
else:
local_path = os.path.join(os.getcwd(), 'jsm.json')
if os.path.exists(local_path):
try:
with open(local_path, 'r', encoding='utf-8') as f:
jsm_data = json.load(f)
except Exception as e:
print(f"读取默认 jsm.json 配置文件失败: {str(e)}")
else:
print("默认的 jsm.json 文件不存在")
# Site name to identifier mapping
site_mappings = {
'立播': 'libo', '闪电':'shandian', '欧哥': 'ouge', '小米': 'xiaomi', '多多': 'duoduo',
'蜡笔': 'labi', '至臻': 'zhizhen', '木偶':'mogg', '六趣': 'liuqu', '虎斑': 'huban',
'下饭': 'xiafan', '玩偶': 'wogg', '星剧社':'star2', '二小': 'xhww'
}
# Proxy configuration
proxy_config = {
"enabled": False,
"proxies": {
"http": "http://127.0.0.1:7890",
"https": "http://127.0.0.1:7890"
}
}
# File path configuration
file_path_config = {
"input_dir": "",
"output_dir": ""
}
# jsm name to site identifier mapping
jsm_mapping = {
"Libvio": "libo",
"Xiaomi": "xiaomi",
"yydsys": "duoduo",
"蜡笔网盘": "labi",
"玩偶 | 蜡笔": "labi",
"至臻|网盘": "zhizhen",
"Huban": "huban",
"Wogg": "wogg",
"Mogg": "mogg",
"玩偶 | 闪电uc": "shandian",
"玩偶 | 二小": "xhww",
"玩偶 | 小米": "xiaomi",
"玩偶 | 多多": "duoduo",
"玩偶 | 木偶": "mogg",
"玩偶gg": "wogg",
"星剧社": "star2"
}
# Sites whose availability test needs a search path appended
search_path_config = {
'闪电': '/index.php/vod/search.html?wd=仙台有树',
'欧哥': '/index.php/vod/search.html?wd=仙台有树',
'小米': '/index.php/vod/search.html?wd=仙台有树',
'多多': '/index.php/vod/search.html?wd=仙台有树',
'蜡笔': '/index.php/vod/search.html?wd=仙台有树',
'至臻': '/index.php/vod/search.html?wd=仙台有树',
'六趣': '/index.php/vod/search.html?wd=仙台有树',
'虎斑': '/index.php/vod/search.html?wd=仙台有树',
'下饭': '/index.php/vod/search.html?wd=仙台有树',
'玩偶': '/vodsearch/-------------.html?wd=仙台有树',
'木偶': '/index.php/vod/search.html?wd=仙台有树',
'二小': '/index.php/vod/search.html?wd=仙台有树',
'立播': '/search/-------------.html?wd=仙台有树&submit='
}
# Sites that require a keyword check, and the keyword for each
keyword_required_sites = {
'闪电': 'class="search-stat"',
'欧哥': 'class="search-stat"',
'小米': 'class="search-stat"',
'多多': 'class="search-stat"',
'蜡笔': 'class="search-stat"',
'至臻': 'class="search-stat"',
'六趣': 'class="search-stat"',
'虎斑': 'class="search-stat"',
'下饭': 'class="search-stat"',
'玩偶': 'class="search-stat"',
'木偶': 'class="search-stat"',
'二小': 'class="search-stat"',
'立播': 'class="stui-screen"'
}
# Optional URL weighting (unlisted URLs default to weight 50)
url_weight_config = {
"木偶": {
"https://aliii.deno.dev": 60,
"http://149.88.87.72:5666": 60
},
"至臻": {
"http://www.xhww.net": 10,
"http://xhww.net": 10
},
"立播": {
"https://libvio.mov": 60,
"https://www.libvio.cc": 60
}
}
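The weight table above is pure data; the code that consumes it is outside this excerpt. Under the stated semantics (unlisted URLs default to weight 50), ordering a site's candidates by weight could be sketched like this (`rank_urls` is a hypothetical helper, not part of the script):

```python
DEFAULT_WEIGHT = 50

def rank_urls(site, urls, weights):
    """Order a site's candidate URLs by descending weight; URLs without
    an explicit weight take the default and keep their original relative
    order, since sorted() is stable."""
    site_weights = weights.get(site, {})
    return sorted(urls, key=lambda u: -site_weights.get(u, DEFAULT_WEIGHT))
```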
# Fallback URL configuration
fallback_url_config = {
"立播": [
"https://libvio.mov",
"https://www.libvio.cc",
"https://libvio.la",
"https://libvio.pro",
"https://libvio.fun",
"https://libvio.me",
"https://libvio.in",
"https://libvio.site",
"https://libvio.art",
"https://libvio.com",
"https://libvio.vip",
"https://libvio.pw",
"https://libvio.link"
],
"闪电": [
"http://1.95.79.193",
"http://1.95.79.193:666"
],
"欧哥": [
"https://woog.nxog.eu.org"
],
"小米": [
"http://www.54271.fun",
"https://www.milvdou.fun",
"https://www.mucpan.cc",
"https://mucpan.cc",
"http://milvdou.fun"
],
"多多": [
"https://tv.yydsys.top",
"https://tv.yydsys.cc",
"https://tv.214521.xyz",
"http://155.248.200.65"
],
"蜡笔": [
"http://feimaoai.site",
"https://feimao666.fun",
"http://feimao888.fun"
],
"至臻": [
"https://mihdr.top",
"http://www.miqk.cc",
"http://www.xhww.net",
"http://xhww.net",
"https://xiaomiai.site"
],
"六趣": [
"https://wp.0v.fit"
],
"虎斑": [
"http://103.45.162.207:20720"
],
"下饭": [
"http://txfpan.top",
"http://www.xn--ghqy10g1w0a.xyz"
],
"玩偶": [
"https://wogg.xxooo.cf",
"https://wogg.333232.xyz",
"https://www.wogg.one",
"https://www.wogg.lol",
"https://www.wogg.net"
],
"木偶": [
"https://tv.91muou.icu",
"https://mo.666291.xyz",
"https://mo.muouso.fun",
"https://aliii.deno.dev",
"http://149.88.87.72:5666"
],
"星剧社": [
"https://mlink.cc/520TV"
],
"二小": [
"https://xhww.net",
"https://www.xhww.net"
]
}
# Global state
last_site = None
def log_message(message, site_name=None, step="", max_error_length=80):
"""Formatted log printing."""
global last_site
status_emojis = {
'[开始]': '🚀', '[成功]': '✅', '[完成]': '🎉', '[失败]': '❌',
'[超时]': '⏰', '[警告]': '⚠️', '[错误]': '🚨', '[信息]': 'ℹ️',
'[选择]': '🔍', '[连接失败]': '🔌'
}
if site_name and site_name != last_site:
print(f"\n{'=' * 40}")
print(f"🌐 [站点: {site_name}]")
print(f"{'=' * 40}")
last_site = site_name
for status, emoji in status_emojis.items():
if status in message:
message = message.replace(status, f"{status} {emoji}")
break
else:
message = f"{message} 📢"
    # Truncate overly long error messages
if "[连接失败]" in message or "[错误]" in message:
if len(message) > max_error_length:
message = message[:max_error_length] + "..."
print(f"[{time.strftime('%Y-%m-%d %H:%M:%S')}] [{step}] {message}") if step else print(message)
def test_url(url, site_name=None):
"""增强版URL测试函数"""
search_path = search_path_config.get(site_name)
test_url = url.strip() + search_path if search_path else url.strip()
keyword = keyword_required_sites.get(site_name)
session = requests.Session()
adapter = requests.adapters.HTTPAdapter(max_retries=2)
session.mount('http://', adapter)
session.mount('https://', adapter)
try:
        # Direct request test
response = session.get(
test_url,
timeout=7,
verify=False,
headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'}
)
if response.status_code == 200:
latency = response.elapsed.total_seconds()
has_keyword = keyword in response.text if keyword else True
log_msg = f"直接访问成功 | 延迟: {latency:.2f}s"
if keyword:
log_msg += f" | 关键字: {'' if has_keyword else ''}"
log_message(f"[成功] {test_url} {log_msg}", site_name, "URL测试")
return latency, has_keyword
log_message(f"[失败] HTTP状态码 {response.status_code}", site_name, "URL测试")
return None, None
except requests.RequestException as e:
error_type = "[超时]" if isinstance(e, requests.Timeout) else "[连接失败]"
log_message(f"{error_type} {str(e)}", site_name, "URL测试")
        # Proxy retry logic
if proxy_config["enabled"]:
try:
response = session.get(
test_url,
timeout=7,
verify=False,
proxies=proxy_config["proxies"],
headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'}
)
if response.status_code == 200:
latency = response.elapsed.total_seconds()
has_keyword = keyword in response.text if keyword else True
log_message(f"[成功] 代理访问成功 | 延迟: {latency:.2f}s | 关键字: {'' if has_keyword else ''}",
site_name, "URL测试")
return latency, has_keyword
except Exception as proxy_e:
log_message(f"[失败] 代理访问错误: {str(proxy_e)}", site_name, "URL测试")
return None, None
def get_best_url(urls, site_name=None, existing_url=None):
"""优化后的URL选择算法"""
if not isinstance(urls, list):
return urls
weights = url_weight_config.get(site_name, {})
default_weight = 50
sorted_urls = sorted([(url, weights.get(url, default_weight)) for url in urls],
key=lambda x: -x[1])
def test_single_url(url_weight):
url, weight = url_weight
latency, has_keyword = test_url(url, site_name)
if latency is not None:
return {
"url": url,
"latency": latency,
"has_keyword": has_keyword,
"weight": weight,
"score": (weight * 0.6) + ((1 / (latency + 0.1)) * 40)
}
return None
with ThreadPoolExecutor() as executor:
candidates = [result for result in executor.map(test_single_url, sorted_urls) if result]
if not candidates:
log_message(f"[警告] 无可用URL使用现有配置: {existing_url}" if existing_url else
"[错误] 无可用URL且无历史配置", site_name, "URL选择")
return existing_url if existing_url else None
    # Rank candidates: keyword present > score > latency
sorted_candidates = sorted(candidates,
key=lambda x: (-x['has_keyword'], -x['score'], x['latency']))
log_message("候选URL评估结果:\n" + "\n".join(
[f"{item['url']} | 权重:{item['weight']} 延迟:{item['latency']:.2f}s 评分:{item['score']:.1f}"
for item in sorted_candidates]), site_name, "URL选择")
best = sorted_candidates[0]
log_message(f"[选择] 最优URL: {best['url']} (评分: {best['score']:.1f})", site_name, "URL选择")
return best['url']
def get_star2_real_url(source_url):
"""改进的星剧社真实URL提取"""
try:
response = requests.get(
source_url,
timeout=8,
verify=False,
headers={'Referer': 'https://mlink.cc/'}
)
if response.status_code == 200:
            # Enhanced regex match
match = re.search(
r'''(?i)(?:href|src|data-?url)=["'](https?://[^"']*?star2\.cn[^"']*)["']''',
response.text
)
if match:
real_url = match.group(1).strip().rstrip('/')
log_message(f"[成功] 提取真实链接: {real_url}", "星剧社", "链接解析")
return real_url
log_message("[失败] 未找到有效链接", "星剧社", "链接解析")
except Exception as e:
log_message(f"[错误] 解析失败: {str(e)}", "星剧社", "链接解析")
return None
def merge_url_data(*dicts):
"""数据合并去重"""
merged = {}
for d in dicts:
if not d: continue
for site, urls in d.items():
merged.setdefault(site, []).extend(urls if isinstance(urls, list) else [urls])
return {k: list(dict.fromkeys(v)) for k, v in merged.items()}
def get_file_path(filename, is_input=True):
"""路径处理函数"""
base_dir = file_path_config.get("input_dir" if is_input else "output_dir", "")
return os.path.join(base_dir or os.getcwd(), filename)
def load_existing_config():
"""加载现有url.json配置"""
url_path = get_file_path('url.json')
if os.path.exists(url_path):
try:
with open(url_path, 'r', encoding='utf-8') as f:
return json.load(f)
except Exception as e:
log_message(f"[错误] 读取现有配置失败: {str(e)}", step="配置加载")
return {}
def get_api_urls():
"""从本地文件获取链接"""
API_FILE_PATH = get_file_path('url.json')
try:
with open(API_FILE_PATH, 'r', encoding='utf-8') as f:
api_data = json.load(f)
print("成功读取 url.json 文件")
        # Build url_mapping from jsm_mapping
url_mapping = {key: api_data.get(value) for key, value in jsm_mapping.items()}
print("生成的 url_mapping:", url_mapping)
return url_mapping
except FileNotFoundError:
print("未找到 url.json 文件,请检查文件路径。")
except json.JSONDecodeError:
print("url.json 文件格式错误,请检查文件内容。")
return {}
def replace_urls(data, urls):
"""替换 JSON 数据中的 URL"""
# 根据 jsm_mapping 转换 api_urls
api_urls = {
jsm_key: urls.get(jsm_value)
for jsm_key, jsm_value in jsm_mapping.items()
}
sites = data.get('sites', [])
replaced_count = 0
for item in sites:
if isinstance(item, dict):
key = item.get('key')
ext = item.get('ext')
new_url = api_urls.get(key)
old_url = None
if new_url and isinstance(ext, str):
parts = ext.split('$$$')
if len(parts) > 1 and parts[1].strip().startswith('http'):
old_url = parts[1]
parts[1] = new_url
item['ext'] = '$$$'.join(parts)
replaced_count += 1
print(f"成功替换 {key} 的链接: {old_url} -> {new_url}")
if 'url' in item:
                del item['url']  # drop the url field
if old_url and not new_url:
print(f"未成功替换 {key} 的链接,原链接: {old_url}")
else:
print(f"跳过非字典类型的 item: {item}")
print(f"总共替换了 {replaced_count} 个链接。")
return data
def update_jsm_config(urls):
"""更新jsm.json配置文件中的URL"""
global jsm_data
if not jsm_data:
log_message("[错误] jsm_data 为空,无法更新配置", step="配置更新")
return False
updated_jsm_data = replace_urls(deepcopy(jsm_data), urls)
try:
jsm_output_path = get_file_path('jsm.json', is_input=False)
os.makedirs(os.path.dirname(jsm_output_path), exist_ok=True)
with open(jsm_output_path, 'w', encoding='utf-8') as f:
json.dump(updated_jsm_data, f, ensure_ascii=False, indent=4)
log_message("[完成] jsm.json 配置文件更新成功", step="配置更新")
return True
except Exception as e:
log_message(f"[错误] 更新 jsm.json 配置文件失败: {str(e)}", step="配置更新")
return False
def process_urls():
"""核心处理流程"""
log_message("[开始] 启动URL更新流程", step="主流程")
    # Load the existing configuration
existing_config = load_existing_config()
reverse_site_mapping = {v: k for k, v in site_mappings.items()}
    # Data-source handling
data_sources = []
try:
remote_data = requests.get(
'https://github.catvod.com/https://raw.githubusercontent.com/celin1286/xiaosa/main/yuan.json',
timeout=10
).json()
data_sources.append(remote_data)
log_message("[成功] 远程数据加载完成", step="数据收集")
except Exception as e:
log_message(f"[错误] 远程数据获取失败: {str(e)}", step="数据收集")
local_path = get_file_path('yuan.json')
if os.path.exists(local_path):
try:
with open(local_path, 'r', encoding='utf-8') as f:
data_sources.append(json.load(f))
log_message("[成功] 本地数据加载完成", step="数据收集")
except Exception as e:
log_message(f"[错误] 本地数据读取失败: {str(e)}", step="数据收集")
data_sources.append(fallback_url_config)
merged_data = merge_url_data(*data_sources)
    # Result storage
result = {'url': {}}
stats = {'total': 0,'success': 0, 'failed': [], 'changed': []}
for cn_name, urls in merged_data.items():
stats['total'] += 1
site_key = site_mappings.get(cn_name)
existing_url = existing_config.get(site_key, '')
if cn_name == '星剧社':
best_source = get_best_url(urls, cn_name, existing_url)
final_url = get_star2_real_url(best_source) if best_source else existing_url
else:
final_url = get_best_url(urls, cn_name, existing_url) or existing_url
if final_url:
result['url'][site_key] = final_url
if existing_url and existing_url != final_url:
                stats['changed'].append(f"{cn_name}: {existing_url} -> {final_url}")
log_message(f"[更新] 配置变更检测", cn_name, "结果处理")
stats['success'] += 1
else:
stats['failed'].append(cn_name)
log_message("[警告] 无可用URL", cn_name, "结果处理")
    # Save files
output_files = {
'yuan.json': merged_data,
'url.json': result['url']
}
for filename, data in output_files.items():
try:
path = get_file_path(filename, is_input=False)
os.makedirs(os.path.dirname(path), exist_ok=True)
with open(path, 'w', encoding='utf-8') as f:
json.dump(data, f, ensure_ascii=False, indent=2)
log_message(f"[成功] 保存文件: {path}", step="数据持久化")
except Exception as e:
log_message(f"[错误] 文件保存失败: {str(e)}", step="数据持久化")
    # New: jsm config update step
log_message("[开始] 启动jsm配置更新", step="主流程")
update_success = update_jsm_config(result['url'])
log_message(
f"[{'成功' if update_success else '失败'}] jsm配置更新完成",
step="主流程"
)
    # Summary report
log_message(
f"[完成] 处理结果: {stats['success']}/{stats['total']} 成功\n"
f"url.json变更项 ({len(stats['changed'])}):\n" + "\n".join(stats['changed']) + "\n"
f"url.json失败项 ({len(stats['failed'])}): {', '.join(stats['failed']) if stats['failed'] else ''}",
step="统计报告"
)
return stats['success'] > 0
def main():
warnings.simplefilter('ignore', InsecureRequestWarning)
process_urls()
if __name__ == "__main__":
start_time = time.time()
main()
elapsed = time.time() - start_time
print(f"总耗时: {elapsed:.2f}")

js/py/aowuplugin/xhamster.py (new executable file, 276 lines)
# coding=utf-8
# !/usr/bin/python
# by嗷呜
import json
import sys
from base64 import b64decode, b64encode
from pyquery import PyQuery as pq
from requests import Session
sys.path.append('..')
from base.spider import Spider
class Spider(Spider):
def init(self, extend=""):
        # Parse the extend config once instead of twice.
        self.proxy = (json.loads(extend).get('proxy') or '') if extend else ''
self.host = self.gethost()
self.headers['referer'] = f'{self.host}/'
self.session = Session()
self.session.headers.update(self.headers)
pass
def getName(self):
pass
def isVideoFormat(self, url):
pass
def manualVideoCheck(self):
pass
def destroy(self):
pass
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
'sec-ch-ua': '"Not(A:Brand";v="99", "Google Chrome";v="133", "Chromium";v="133"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-full-version': '"133.0.6943.98"',
'sec-ch-ua-arch': '"x86"',
'sec-ch-ua-platform': '"Windows"',
'sec-ch-ua-platform-version': '"19.0.0"',
'sec-ch-ua-model': '""',
'sec-ch-ua-full-version-list': '"Not(A:Brand";v="99.0.0.0", "Google Chrome";v="133.0.6943.98", "Chromium";v="133.0.6943.98"',
'dnt': '1',
'upgrade-insecure-requests': '1',
'sec-fetch-site': 'none',
'sec-fetch-mode': 'navigate',
'sec-fetch-user': '?1',
'sec-fetch-dest': 'document',
'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8',
'priority': 'u=0, i'
}
def homeContent(self, filter):
result = {}
cateManual = {
"4K": "/4k",
"国产": "two_click_/categories/chinese",
"最新": "/newest",
"最佳": "/best",
"频道": "/channels",
"类别": "/categories",
"明星": "/pornstars"
}
classes = []
filters = {}
for k in cateManual:
classes.append({
'type_name': k,
'type_id': cateManual[k]
})
if k != '4K': filters[cateManual[k]] = [{'key': 'type', 'name': '类型', 'value': [{'n': '4K', 'v': '/4k'}]}]
result['class'] = classes
result['filters'] = filters
return result
def homeVideoContent(self):
data = self.getpq()
return {'list': self.getlist(data(".thumb-list--sidebar .thumb-list__item"))}
def categoryContent(self, tid, pg, filter, extend):
vdata = []
result = {}
result['page'] = pg
result['pagecount'] = 9999
result['limit'] = 90
result['total'] = 999999
if tid in ['/4k', '/newest', '/best'] or 'two_click_' in tid:
if 'two_click_' in tid: tid = tid.split('click_')[-1]
data = self.getpq(f'{tid}{extend.get("type", "")}/{pg}')
vdata = self.getlist(data(".thumb-list--sidebar .thumb-list__item"))
elif tid == '/channels':
data = self.getpq(f'{tid}/{pg}')
jsdata = self.getjsdata(data)
for i in jsdata['channels']:
vdata.append({
'vod_id': f"two_click_" + i.get('channelURL'),
'vod_name': i.get('channelName'),
'vod_pic': i.get('siteLogoURL'),
'vod_year': f'videos:{i.get("videoCount")}',
'vod_tag': 'folder',
'vod_remarks': f'subscribers:{i["subscriptionModel"].get("subscribers")}',
'style': {'ratio': 1.33, 'type': 'rect'}
})
elif tid == '/categories':
result['pagecount'] = pg
data = self.getpq(tid)
self.cdata = self.getjsdata(data)
for i in self.cdata['layoutPage']['store']['popular']['assignable']:
vdata.append({
'vod_id': "one_click_" + i.get('id'),
'vod_name': i.get('name'),
'vod_pic': '',
'vod_tag': 'folder',
'style': {'ratio': 1.33, 'type': 'rect'}
})
elif tid == '/pornstars':
data = self.getpq(f'{tid}/{pg}')
pdata = self.getjsdata(data)
for i in pdata['pagesPornstarsComponent']['pornstarListProps']['pornstars']:
vdata.append({
'vod_id': f"two_click_" + i.get('pageURL'),
'vod_name': i.get('name'),
'vod_pic': i.get('imageThumbUrl'),
'vod_remarks': i.get('translatedCountryName'),
'vod_tag': 'folder',
'style': {'ratio': 1.33, 'type': 'rect'}
})
elif 'one_click' in tid:
result['pagecount'] = pg
tid = tid.split('click_')[-1]
for i in self.cdata['layoutPage']['store']['popular']['assignable']:
if i.get('id') == tid:
for j in i['items']:
vdata.append({
'vod_id': f"two_click_" + j.get('url'),
'vod_name': j.get('name'),
'vod_pic': j.get('thumb'),
'vod_tag': 'folder',
'style': {'ratio': 1.33, 'type': 'rect'}
})
result['list'] = vdata
return result
def detailContent(self, ids):
data = self.getpq(ids[0])
djs = self.getjsdata(data)
vn = data('meta[property="og:title"]').attr('content')
dtext = data('#video-tags-list-container')
href = dtext('a').attr('href')
title = dtext('span[class*="body-bold-"]').eq(0).text()
pdtitle = ''
if href:
pdtitle = '[a=cr:' + json.dumps({'id': 'two_click_' + href, 'name': title}) + '/]' + title + '[/a]'
vod = {
'vod_name': vn,
'vod_director': pdtitle,
'vod_remarks': data('.rb-new__info').text(),
'vod_play_from': 'Xhamster',
'vod_play_url': ''
}
try:
plist = []
d = djs['xplayerSettings']['sources']
f = d.get('standard')
def get_sort_key(url):
quality = url.split('$')[0]
number = ''.join(filter(str.isdigit, quality))
number = int(number) if number else 0
return -number, quality
if f:
for key, value in f.items():
if isinstance(value, list):
for info in value:
id = self.e64(f'{0}@@@@{info.get("url") or info.get("fallback")}')
plist.append(f"{info.get('label') or info.get('quality')}${id}")
plist.sort(key=get_sort_key)
if d.get('hls'):
for format_type, info in d['hls'].items():
if url := info.get('url'):
encoded = self.e64(f'{0}@@@@{url}')
plist.append(f"{format_type}${encoded}")
except Exception as e:
plist = [f"{vn}${self.e64(f'{1}@@@@{ids[0]}')}"]
print(f"获取视频信息失败: {str(e)}")
vod['vod_play_url'] = '#'.join(plist)
return {'list': [vod]}
def searchContent(self, key, quick, pg="1"):
data = self.getpq(f'/search/{key}?page={pg}')
return {'list': self.getlist(data(".thumb-list--sidebar .thumb-list__item")), 'page': pg}
def playerContent(self, flag, id, vipFlags):
headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.5410.0 Safari/537.36',
'pragma': 'no-cache',
'cache-control': 'no-cache',
'sec-ch-ua-platform': '"Windows"',
'sec-ch-ua': '"Not(A:Brand";v="99", "Google Chrome";v="133", "Chromium";v="133"',
'dnt': '1',
'sec-ch-ua-mobile': '?0',
'origin': self.host,
'sec-fetch-site': 'cross-site',
'sec-fetch-mode': 'cors',
'sec-fetch-dest': 'empty',
'referer': f'{self.host}/',
'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8',
'priority': 'u=1, i',
}
ids = self.d64(id).split('@@@@')
return {'parse': int(ids[0]), 'url': f'{self.proxy}{ids[1]}', 'header': headers}
def localProxy(self, param):
pass
def gethost(self):
try:
response = self.fetch(f'{self.proxy}https://xhamster.com', headers=self.headers, allow_redirects=False)
return response.headers['Location']
except Exception as e:
print(f"获取主页失败: {str(e)}")
return "https://zn.xhamster.com"
def e64(self, text):
try:
text_bytes = text.encode('utf-8')
encoded_bytes = b64encode(text_bytes)
return encoded_bytes.decode('utf-8')
except Exception as e:
print(f"Base64编码错误: {str(e)}")
return ""
def d64(self, encoded_text):
try:
encoded_bytes = encoded_text.encode('utf-8')
decoded_bytes = b64decode(encoded_bytes)
return decoded_bytes.decode('utf-8')
except Exception as e:
print(f"Base64解码错误: {str(e)}")
return ""
def getlist(self, data):
vlist = []
for i in data.items():
vlist.append({
'vod_id': i('.role-pop').attr('href'),
'vod_name': i('.video-thumb-info a').text(),
'vod_pic': i('.role-pop img').attr('src'),
'vod_year': i('.video-thumb-info .video-thumb-views').text().split(' ')[0],
'vod_remarks': i('.role-pop div[data-role="video-duration"]').text(),
'style': {'ratio': 1.33, 'type': 'rect'}
})
return vlist
def getpq(self, path=''):
h = '' if path.startswith('http') else self.host
response = self.session.get(f'{self.proxy}{h}{path}').text
try:
return pq(response)
except Exception as e:
print(f"{str(e)}")
return pq(response.encode('utf-8'))
def getjsdata(self, data):
vhtml = data("script[id='initials-script']").text()
jst = json.loads(vhtml.split('initials=')[-1][:-1])
return jst
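`detailContent` and `playerContent` above pass play URLs around as a base64 token of the form `<parse-flag>@@@@<url>`. A self-contained round trip of that packing (the example URL is illustrative, not a real endpoint):

```python
from base64 import b64decode, b64encode

def pack(parse_flag: int, url: str) -> str:
    # Mirrors how e64 is used above: "<flag>@@@@<url>", base64-encoded.
    return b64encode(f"{parse_flag}@@@@{url}".encode("utf-8")).decode("utf-8")

def unpack(token: str):
    # Mirrors d64 plus the split("@@@@") in playerContent.
    flag, url = b64decode(token.encode("utf-8")).decode("utf-8").split("@@@@")
    return int(flag), url

token = pack(0, "https://example.com/video.m3u8")
print(unpack(token))  # (0, 'https://example.com/video.m3u8')
```

A flag of `0` means the player can use the URL directly; `1` asks the client to parse the page.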

js/py/base/local.py (new executable file, 106 lines)
#coding=utf-8
#!/usr/bin/python
from re import sub
from requests import get
from urllib.parse import unquote
from threading import Thread, Event
from socketserver import ThreadingMixIn
from urllib.parse import urlparse, parse_qs
from importlib.machinery import SourceFileLoader
from http.server import BaseHTTPRequestHandler, HTTPServer
cache = {}
class ProxyServer(BaseHTTPRequestHandler):
def do_GET(self):
urlParts = urlparse(self.path)
queryQarams = parse_qs(urlParts.query)
do = queryQarams['do'][0]
try:
key = queryQarams['key'][0]
except:
key = ''
try:
value = queryQarams['value'][0]
except:
value = ''
        if do == 'set':
            cache[key] = value
            self.send_response(200)
            self.end_headers()
        elif do == 'get':
            self.send_response(200)
            self.end_headers()
            if key in cache:
                self.wfile.write(cache[key].encode())
        elif do in ('del', 'delete'):  # Spider.delCache sends do=del
            cache.pop(key, None)
            self.send_response(200)
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
def do_POST(self):
urlParts = urlparse(self.path)
queryQarams = parse_qs(urlParts.query)
key = queryQarams['key'][0]
try:
contentLength = int(self.headers.get('Content-Length', 0))
value = self.rfile.read(contentLength).decode().replace('+', ' ')
value = sub(r'value=(.*?)', '', unquote(value))
except:
value = ''
cache[key] = value
self.send_response(200)
self.end_headers()
class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
"""Handle requests in a separate thread."""
def serveForever(event):
    # Bind once and serve requests until the event is set.
    server = ThreadedHTTPServer(('0.0.0.0', 9978), ProxyServer)
    try:
        while not event.is_set():
            server.handle_request()
    except Exception as erro:
        print(erro)
    finally:
        server.server_close()
def loadFromDisk(fileName):
name = fileName.split('/')[-1].split('.')[0]
sp = SourceFileLoader(name, fileName).load_module().Spider()
return sp
def run(fileName, proxy=False):
event = Event()
if proxy:
thread = Thread(target=serveForever, args=(event,), name='localProxy')
thread.start()
    sp = loadFromDisk(f'../plugin/{fileName}.py')  # load the local spider script
    sp.init('')  # initialize
    try:
        # formatJo = sp.decode('')
        # formatJo = sp.homeContent(True)  # home page
        # formatJo = sp.homeVideoContent()  # home page videos
        formatJo = sp.searchContent("繁花", False, '1')  # search
        # formatJo = sp.categoryContent('bilibili', 1, False, {})  # category
        # formatJo = sp.detailContent([''])  # detail
        # formatJo = sp.playerContent("", '', {})  # playback
        # formatJo = sp.localProxy({})  # local proxy
print(formatJo)
except Exception as erro:
print(erro)
finally:
event.set()
try:
get('http://127.0.0.1:9978/cache?do=none')
except:
pass
if __name__ == '__main__':
"""
run(PY爬虫文件名, 是否启用本地代理)
再去run函数中修改函数参数
"""
run('py_bilibilivd', True)

js/py/base/localProxy.py (new executable file, 6 lines)
class Proxy:
def getUrl(self, local):
return 'http://127.0.0.1:9978'
def getPort(self):
return 9978
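`Spider.getProxyUrl` in base/spider.py composes its local endpoint from this class by appending a dispatch query parameter. A quick sketch of the composed URL:

```python
class Proxy:
    def getUrl(self, local):
        return 'http://127.0.0.1:9978'

    def getPort(self):
        return 9978

# base/spider.py appends the dispatch query parameter:
proxy_url = f"{Proxy().getUrl(True)}?do=py"
print(proxy_url)  # http://127.0.0.1:9978?do=py
```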

js/py/base/spider.py (new executable file, 151 lines)
import re
import os
import json
import time
import requests
from lxml import etree
from abc import abstractmethod, ABCMeta
from importlib.machinery import SourceFileLoader
from base.localProxy import Proxy
class Spider(metaclass=ABCMeta):
_instance = None
def __init__(self):
self.extend = ''
def __new__(cls, *args, **kwargs):
if cls._instance:
return cls._instance
else:
cls._instance = super().__new__(cls)
return cls._instance
@abstractmethod
def init(self, extend=""):
pass
def homeContent(self, filter):
pass
def homeVideoContent(self):
pass
def categoryContent(self, tid, pg, filter, extend):
pass
def detailContent(self, ids):
pass
def searchContent(self, key, quick, pg="1"):
pass
def playerContent(self, flag, id, vipFlags):
pass
def liveContent(self, url):
pass
def localProxy(self, param):
pass
def isVideoFormat(self, url):
pass
def manualVideoCheck(self):
pass
def action(self, action):
pass
def destroy(self):
pass
def getName(self):
pass
def getDependence(self):
return []
def loadSpider(self, name):
return self.loadModule(name).Spider()
    def loadModule(self, name):
        path = os.path.join('../plugin', f'{name}.py')
        return SourceFileLoader(name, path).load_module()
def regStr(self, reg, src, group=1):
m = re.search(reg, src)
src = ''
if m:
src = m.group(group)
return src
def removeHtmlTags(self, src):
clean = re.compile('<.*?>')
return re.sub(clean, '', src)
def cleanText(self, src):
clean = re.sub('[\U0001F600-\U0001F64F\U0001F300-\U0001F5FF\U0001F680-\U0001F6FF\U0001F1E0-\U0001F1FF]', '',
src)
return clean
def fetch(self, url, params=None, cookies=None, headers=None, timeout=5, verify=True, stream=False,
allow_redirects=True):
rsp = requests.get(url, params=params, cookies=cookies, headers=headers, timeout=timeout, verify=verify,
stream=stream, allow_redirects=allow_redirects)
rsp.encoding = 'utf-8'
return rsp
def post(self, url, params=None, data=None, json=None, cookies=None, headers=None, timeout=5, verify=True,
stream=False, allow_redirects=True):
rsp = requests.post(url, params=params, data=data, json=json, cookies=cookies, headers=headers, timeout=timeout,
verify=verify, stream=stream, allow_redirects=allow_redirects)
rsp.encoding = 'utf-8'
return rsp
def html(self, content):
return etree.HTML(content)
    def str2json(self, s):
        return json.loads(s)
    def json2str(self, obj):
        return json.dumps(obj, ensure_ascii=False)
    def getProxyUrl(self, local=True):
        return f'{Proxy().getUrl(local)}?do=py'
def log(self, msg):
if isinstance(msg, dict) or isinstance(msg, list):
print(json.dumps(msg, ensure_ascii=False))
else:
print(f'{msg}')
def getCache(self, key):
        value = self.fetch(f'http://127.0.0.1:{Proxy().getPort()}/cache?do=get&key={key}', timeout=5).text
if len(value) > 0:
if value.startswith('{') and value.endswith('}') or value.startswith('[') and value.endswith(']'):
value = json.loads(value)
if type(value) == dict:
if not 'expiresAt' in value or value['expiresAt'] >= int(time.time()):
return value
else:
self.delCache(key)
return None
return value
else:
return None
def setCache(self, key, value):
if type(value) in [int, float]:
value = str(value)
if len(value) > 0:
if type(value) == dict or type(value) == list:
value = json.dumps(value, ensure_ascii=False)
        r = self.post(f'http://127.0.0.1:{Proxy().getPort()}/cache?do=set&key={key}', data={"value": value}, timeout=5)
return 'succeed' if r.status_code == 200 else 'failed'
def delCache(self, key):
        r = self.fetch(f'http://127.0.0.1:{Proxy().getPort()}/cache?do=del&key={key}', timeout=5)
return 'succeed' if r.status_code == 200 else 'failed'
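`getCache` above treats a cached JSON dict carrying an `expiresAt` epoch second as a TTL entry and evicts it once stale. That check in isolation, with no proxy server required (the payload fields are illustrative):

```python
import json
import time

def is_live(raw: str) -> bool:
    # Mirrors the expiry test in Spider.getCache: dicts without an
    # 'expiresAt' key, or whose expiry lies in the future, are live.
    value = json.loads(raw)
    if isinstance(value, dict):
        return 'expiresAt' not in value or value['expiresAt'] >= int(time.time())
    return True

fresh = json.dumps({"host": "https://example.com", "expiresAt": int(time.time()) + 3600})
stale = json.dumps({"host": "https://example.com", "expiresAt": 0})
print(is_live(fresh), is_live(stale))  # True False
```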

js/py/plugin/adult/51吸瓜.py (new executable file, 352 lines)
# -*- coding: utf-8 -*-
# by @嗷呜
import json
import random
import re
import sys
import threading
import time
from base64 import b64decode, b64encode
from urllib.parse import urlparse
import requests
from Crypto.Cipher import AES
from Crypto.Util.Padding import unpad
from pyquery import PyQuery as pq
sys.path.append('..')
from base.spider import Spider
class Spider(Spider):
def init(self, extend=""):
try:self.proxies = json.loads(extend)
except:self.proxies = {}
self.headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36',
'Pragma': 'no-cache',
'Cache-Control': 'no-cache',
'sec-ch-ua-platform': '"macOS"',
'sec-ch-ua': '"Not/A)Brand";v="8", "Chromium";v="134", "Google Chrome";v="134"',
'DNT': '1',
'sec-ch-ua-mobile': '?0',
'Origin': '',
'Sec-Fetch-Site': 'cross-site',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Dest': 'empty',
'Accept-Language': 'zh-CN,zh;q=0.9',
}
self.host=self.host_late(self.gethosts())
self.headers.update({'Origin': self.host, 'Referer': f"{self.host}/"})
self.getcnh()
pass
def getName(self):
pass
def isVideoFormat(self, url):
pass
def manualVideoCheck(self):
pass
def destroy(self):
pass
def homeContent(self, filter):
data=self.getpq(requests.get(self.host, headers=self.headers,proxies=self.proxies).text)
result = {}
classes = []
for k in data('.category-list ul li').items():
classes.append({
'type_name': k('a').text(),
'type_id': k('a').attr('href')
})
result['class'] = classes
result['list'] = self.getlist(data('#index article a'))
return result
def homeVideoContent(self):
pass
def categoryContent(self, tid, pg, filter, extend):
if '@folder' in tid:
id = tid.replace('@folder', '')
videos = self.getfod(id)
else:
data = self.getpq(requests.get(f"{self.host}{tid}{pg}", headers=self.headers, proxies=self.proxies).text)
videos = self.getlist(data('#archive article a'), tid)
result = {}
result['list'] = videos
result['page'] = pg
result['pagecount'] = 1 if '@folder' in tid else 99999
result['limit'] = 90
result['total'] = 999999
return result
def detailContent(self, ids):
url=f"{self.host}{ids[0]}"
data=self.getpq(requests.get(url, headers=self.headers,proxies=self.proxies).text)
vod = {'vod_play_from': '51吸瓜'}
try:
clist = []
if data('.tags .keywords a'):
for k in data('.tags .keywords a').items():
title = k.text()
href = k.attr('href')
clist.append('[a=cr:' + json.dumps({'id': href, 'name': title}) + '/]' + title + '[/a]')
vod['vod_content'] = ' '.join(clist)
except:
vod['vod_content'] = data('.post-title').text()
try:
plist=[]
if data('.dplayer'):
for c, k in enumerate(data('.dplayer').items(), start=1):
config = json.loads(k.attr('data-config'))
plist.append(f"视频{c}${config['video']['url']}")
vod['vod_play_url']='#'.join(plist)
except:
vod['vod_play_url']=f"请停止活塞运动,可能没有视频${url}"
return {'list':[vod]}
def searchContent(self, key, quick, pg="1"):
data=self.getpq(requests.get(f"{self.host}/search/{key}/{pg}", headers=self.headers,proxies=self.proxies).text)
return {'list':self.getlist(data('#archive article a')),'page':pg}
def playerContent(self, flag, id, vipFlags):
p=1
if '.m3u8' in id:p,id=0,self.proxy(id)
return {'parse': p, 'url': id, 'header': self.headers}
def localProxy(self, param):
if param.get('type') == 'img':
res=requests.get(param['url'], headers=self.headers, proxies=self.proxies, timeout=10)
return [200,res.headers.get('Content-Type'),self.aesimg(res.content)]
elif param.get('type') == 'm3u8':return self.m3Proxy(param['url'])
else:return self.tsProxy(param['url'])
def proxy(self, data, type='m3u8'):
if data and len(self.proxies):return f"{self.getProxyUrl()}&url={self.e64(data)}&type={type}"
else:return data
def m3Proxy(self, url):
url=self.d64(url)
ydata = requests.get(url, headers=self.headers, proxies=self.proxies, allow_redirects=False)
data = ydata.content.decode('utf-8')
if ydata.headers.get('Location'):
url = ydata.headers['Location']
data = requests.get(url, headers=self.headers, proxies=self.proxies).content.decode('utf-8')
lines = data.strip().split('\n')
last_r = url[:url.rfind('/')]
parsed_url = urlparse(url)
durl = parsed_url.scheme + "://" + parsed_url.netloc
iskey=True
for index, string in enumerate(lines):
if iskey and 'URI' in string:
pattern = r'URI="([^"]*)"'
match = re.search(pattern, string)
if match:
lines[index] = re.sub(pattern, f'URI="{self.proxy(match.group(1), "mkey")}"', string)
iskey=False
continue
if '#EXT' not in string:
if 'http' not in string:
domain = last_r if string.count('/') < 2 else durl
string = domain + ('' if string.startswith('/') else '/') + string
lines[index] = self.proxy(string, string.split('.')[-1].split('?')[0])
data = '\n'.join(lines)
return [200, "application/vnd.apple.mpegur", data]
def tsProxy(self, url):
url = self.d64(url)
data = requests.get(url, headers=self.headers, proxies=self.proxies, stream=True)
return [200, data.headers['Content-Type'], data.content]
def e64(self, text):
try:
text_bytes = text.encode('utf-8')
encoded_bytes = b64encode(text_bytes)
return encoded_bytes.decode('utf-8')
except Exception as e:
print(f"Base64编码错误: {str(e)}")
return ""
def d64(self, encoded_text):
try:
encoded_bytes = encoded_text.encode('utf-8')
decoded_bytes = b64decode(encoded_bytes)
return decoded_bytes.decode('utf-8')
except Exception as e:
print(f"Base64解码错误: {str(e)}")
return ""
def gethosts(self):
url = 'https://51cg.fun'
curl = self.getCache('host_51cn')
if curl:
try:
data = self.getpq(requests.get(curl, headers=self.headers, proxies=self.proxies).text)('a').attr('href')
if data:
parsed_url = urlparse(data)
url = parsed_url.scheme + "://" + parsed_url.netloc
except:
pass
try:
html = self.getpq(requests.get(url, headers=self.headers, proxies=self.proxies).text)
html_pattern = r"Base64\.decode\('([^']+)'\)"
html_match = re.search(html_pattern, html('script').eq(-1).text(), re.DOTALL)
if not html_match: raise Exception("未找到html")
html = self.getpq(b64decode(html_match.group(1)).decode())('script').eq(-4).text()
return self.hstr(html)
except Exception as e:
self.log(f"获取: {str(e)}")
return ""
def getcnh(self):
data=self.getpq(requests.get(f"{self.host}/ybml.html", headers=self.headers,proxies=self.proxies).text)
url=data('.post-content[itemprop="articleBody"] blockquote p').eq(0)('a').attr('href')
parsed_url = urlparse(url)
host = parsed_url.scheme + "://" + parsed_url.netloc
self.setCache('host_51cn',host)
def hstr(self, html):
pattern = r"(backupLine\s*=\s*\[\])\s+(words\s*=)"
replacement = r"\1, \2"
html = re.sub(pattern, replacement, html)
data = f"""
var Vx = {{
range: function(start, end) {{
const result = [];
for (let i = start; i < end; i++) {{
result.push(i);
}}
return result;
}},
map: function(array, callback) {{
const result = [];
for (let i = 0; i < array.length; i++) {{
result.push(callback(array[i], i, array));
}}
return result;
}}
}};
Array.prototype.random = function() {{
return this[Math.floor(Math.random() * this.length)];
}};
var location = {{
protocol: "https:"
}};
function executeAndGetResults() {{
var allLines = lineAry.concat(backupLine);
var resultStr = JSON.stringify(allLines);
return resultStr;
}};
{html}
executeAndGetResults();
"""
return self.p_qjs(data)
def p_qjs(self, js_code):
try:
from com.whl.quickjs.wrapper import QuickJSContext
ctx = QuickJSContext.create()
result_json = ctx.evaluate(js_code)
ctx.destroy()
return json.loads(result_json)
except Exception as e:
self.log(f"执行失败: {e}")
return []
def host_late(self, url_list):
if isinstance(url_list, str):
urls = [u.strip() for u in url_list.split(',')]
else:
urls = url_list
if len(urls) <= 1:
return urls[0] if urls else ''
results = {}
threads = []
def test_host(url):
try:
start_time = time.time()
response = requests.head(url,headers=self.headers,proxies=self.proxies,timeout=1.0, allow_redirects=False)
delay = (time.time() - start_time) * 1000
results[url] = delay
except Exception:
results[url] = float('inf')
for url in urls:
t = threading.Thread(target=test_host, args=(url,))
threads.append(t)
t.start()
for t in threads:
t.join()
return min(results.items(), key=lambda x: x[1])[0]
def getlist(self, data, tid=''):
videos = []
l = '/mrdg' in tid
for k in data.items():
a = k.attr('href')
b = k('h2').text()
c = k('span[itemprop="datePublished"]').text()
if a and b and c:
videos.append({
'vod_id': f"{a}{'@folder' if l else ''}",
'vod_name': b.replace('\n', ' '),
'vod_pic': self.getimg(k('script').text()),
'vod_remarks': c,
'vod_tag': 'folder' if l else '',
'style': {"type": "rect", "ratio": 1.33}
})
return videos
def getfod(self, id):
url = f"{self.host}{id}"
data = self.getpq(requests.get(url, headers=self.headers, proxies=self.proxies).text)
vdata=data('.post-content[itemprop="articleBody"]')
r=['.txt-apps','.line','blockquote','.tags','.content-tabs']
for i in r: vdata.remove(i)
p=vdata('p')
videos=[]
for i,x in enumerate(vdata('h2').items()):
c=i*2
videos.append({
'vod_id': p.eq(c)('a').attr('href'),
'vod_name': p.eq(c).text(),
'vod_pic': f"{self.getProxyUrl()}&url={p.eq(c+1)('img').attr('data-xkrkllgl')}&type=img",
'vod_remarks':x.text()
})
return videos
def getimg(self, text):
match = re.search(r"loadBannerDirect\('([^']+)'", text)
if match:
url = match.group(1)
return f"{self.getProxyUrl()}&url={url}&type=img"
else:
return ''
def aesimg(self, word):
key = b'f5d965df75336270'
iv = b'97b60394abc2fbe1'
cipher = AES.new(key, AES.MODE_CBC, iv)
decrypted = unpad(cipher.decrypt(word), AES.block_size)
return decrypted
def getpq(self, data):
try:
return pq(data)
except Exception as e:
print(f"pyquery parse error, retrying as utf-8 bytes: {str(e)}")
return pq(data.encode('utf-8'))
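The comma-splice repair in `hstr` above can be exercised on its own. A minimal sketch with a made-up input string (the real input is the site's obfuscated line-decoding script; `splice_missing_comma` is a hypothetical name):

```python
import re

def splice_missing_comma(js_src):
    """Re-join `backupLine = []` and a following `words =` statement.

    Mirrors the re.sub in hstr(): the two capture groups keep both
    statements intact and only the separator between them changes.
    """
    pattern = r"(backupLine\s*=\s*\[\])\s+(words\s*=)"
    return re.sub(pattern, r"\1, \2", js_src)

# hypothetical fragment shaped like the obfuscated source
broken = "var lineAry = [], backupLine = []\nwords = ['a', 'b'];"
print(splice_missing_comma(broken))
# → var lineAry = [], backupLine = [], words = ['a', 'b'];
```

Because both statements are captured, strings that already contain the comma (or no whitespace gap) pass through unchanged.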

js/py/plugin/adult/DSYS.py Executable file

@ -0,0 +1,165 @@
# -*- coding: utf-8 -*-
# by @嗷呜
import time
import uuid
from base64 import b64decode, b64encode
import json
import sys
from urllib.parse import urlparse, urlunparse
from Crypto.Cipher import AES
from Crypto.Hash import MD5
from Crypto.Util.Padding import unpad, pad
sys.path.append('..')
from base.spider import Spider
class Spider(Spider):
def init(self, extend=""):
pass
def getName(self):
pass
def isVideoFormat(self, url):
pass
def manualVideoCheck(self):
pass
def destroy(self):
pass
host = "https://api.230110.xyz"
phost = "https://cdn.230110.xyz"
headers = {
'origin': host,
'referer': f'{host}/',
'user-agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 17_0_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.8 Mobile/15E148 Safari/604.1',
}
def homeContent(self, filter):
data='9XSPkyFMrOOG34JSg//ZosMof45cyBo9hwZMZ5rvI6Yz/ZZlXWIf8/644OzwW+FNIOdJ61R/Lxjy1tqN+ZzokxtiVzb8LjYAkh6GFudwAUXFt9yS1ZjAxC3tDKrQsJQLk3nym0s00DBBzLBntRBDFz7nbba+OOBuQOZpL3CESGL42l4opdoViQLhO/dIizY1kIOk2NxxpDC9Z751gPl1ctHWuLWhuLG/QWgNWi/iHScjKrMHJKcC9GQHst/4Q3dgZ03eQIIVB6jvoV1XXoBCz6fjM/jM3BXpzSttT4Stglwy93gWuNWuZiKypHK2Q0lO10oM0ceRW2a0fPGId+rNYMRO3cR/C0ZueD4cmTAVOuxVr9ZZSP8/nhD0bHyAPONXtchIDJb0O/kdFHk2KTJfQ5q4fHOyzezczc4iQDV/R0S8cGZKM14MF+wytA/iljfj43H0UYqq5pM+MCUGRTdYEtuxCp0+A+DiOhNZwY/Km/TgBoGZQWGbpljJ2LAVnWhxX+ickLH7zuR/FeIwP/R8zOuR+8C8UlT9eHTqtvfNzaGdFxt316atHy8TNjRO7J5a177mqsHs3ziG0toDDzLDCbhRUjFgVA3ktahhXiWaaCo/ZGSJAA8TDO5DYqnJ0JDaX0ILPj8QB5zxrHYmRE8PboIr3RBAjz1sREbaHfjrUjoh29ePhlolLV00EvgoxP5knaqt5Ws/sq5IG57qKCAPgqXzblPLHToJGBtukKhLp8jbGJrkb6PVn4/jysks0NGE'
return {'class':self.aes(data,False)}
def homeVideoContent(self):
pass
def categoryContent(self, tid, pg, filter, extend):
data = {"q": "", "filter": [f"type_id = {tid}"], "offset": (int(pg)-1) * 24, "limit": 24, "sort": ["video_time:desc"],"lang": "zh-cn", "route": "/videos/search"}
result = {}
if 'skey_' in tid:return self.searchContent(tid.split('_')[-1], True, pg)
result['list'] = self.getl(self.getdata(data))
result['page'] = pg
result['pagecount'] = 9999
result['limit'] = 90
result['total'] = 999999
return result
def detailContent(self, ids):
data={"limit":1,"filter":[f"video_id = {ids[0]}"],"lang":"zh-cn","route":"/videos/search"}
res = self.getdata(data)[0]
purl=urlunparse(urlparse(self.phost)._replace(path=urlparse(res.get('video_url')).path))
vod = {
'vod_play_from': 'dsysav',
'vod_play_url': f"{res.get('video_duration')}${purl}"
}
if res.get('video_tag'):
clist = []
tags=res['video_tag'].split(',')
for k in tags:
clist.append('[a=cr:' + json.dumps({'id': f'skey_{k}', 'name': k}) + '/]' + k + '[/a]')
vod['vod_content'] = ' '.join(clist)
return {'list':[vod]}
def searchContent(self, key, quick, pg="1"):
data={"q":key,"filter":[],"offset":(int(pg)-1) * 24,"limit":24,"sort":["video_time:desc"],"lang":"zh-cn","route":"/videos/search"}
return {'list':self.getl(self.getdata(data)),'page':pg}
def playerContent(self, flag, id, vipFlags):
if id.endswith('.mpd'):
id=f"{self.getProxyUrl()}&url={self.e64(id)}&type=mpd"
return {'parse': 0, 'url': id, 'header':self.headers}
def localProxy(self, param):
if param.get('type') and param['type']=='mpd':
url = self.d64(param.get('url'))
ids=url.split('/')
id=f"{ids[-3]}/{ids[-2]}/"
# escape '&' so the rewritten segment URLs stay valid inside the XML MPD manifest
xpu = f"{self.getProxyUrl()}&path=".replace('&', '&amp;')
data = self.fetch(url, headers=self.headers).text
data = data.replace('initialization="', f'initialization="{xpu}{id}').replace('media="',f'media="{xpu}{id}')
return [200,'application/octet-stream',data]
else:
# sign the CDN path: MD5 over secret + '/mpd/' + path + fixed expiry timestamp
hsign=self.md5(f"AjPuom638LmWfWyeM5YueKuJ9PuWLdRn/mpd/{param.get('path')}1767196800")
bytes_data = bytes.fromhex(hsign)
sign = b64encode(bytes_data).decode('utf-8').replace('=','').replace('+','-').replace('/','_')
url=f"{self.phost}/mpd/{param.get('path')}?sign={sign}&expire=1767196800"
return [302,'text/plain',None,{'Location':url}]
def liveContent(self, url):
pass
def aes(self, text, operation=True):
key = b'OPQT123412FRANME'
iv = b'MRDCQP12QPM13412'
cipher = AES.new(key, AES.MODE_CBC, iv)
if operation:
ct_bytes = cipher.encrypt(pad(json.dumps(text).encode("utf-8"), AES.block_size))
ct = b64encode(ct_bytes).decode("utf-8")
return ct
else:
pt = unpad(cipher.decrypt(b64decode(text)), AES.block_size)
return json.loads(pt.decode("utf-8"))
def e64(self, text):
try:
text_bytes = text.encode('utf-8')
encoded_bytes = b64encode(text_bytes)
return encoded_bytes.decode('utf-8')
except Exception as e:
print(f"Base64 encode error: {str(e)}")
return ""
def d64(self,encoded_text):
try:
encoded_bytes = encoded_text.encode('utf-8')
decoded_bytes = b64decode(encoded_bytes)
return decoded_bytes.decode('utf-8')
except Exception as e:
print(f"Base64 decode error: {str(e)}")
return ""
def md5(self, text):
h = MD5.new()
h.update(text.encode('utf-8'))
return h.hexdigest()
def getl(self,data):
videos = []
for i in data:
img = i.get('video_cover')
if img and 'http' in img:img = urlunparse(urlparse(self.phost)._replace(path=urlparse(img).path))
videos.append({
'vod_id': i.get('video_id'),
'vod_name': i.get('video_title'),
'vod_pic': img,
'vod_remarks': i.get('video_duration'),
'style': {"type": "rect", "ratio": 1.33}
})
return videos
def getdata(self,data):
uid = str(uuid.uuid4())
t = int(time.time())
json_data = {
'sign': self.md5(f"{self.e64(json.dumps(data))}{uid}{t}AjPuom638LmWfWyeM5YueKuJ9PuWLdRn"),
'nonce': uid,
'timestamp': t,
'data': self.aes(data),
}
res = self.post(f"{self.host}/v1", json=json_data, headers=self.headers).json()
res = self.aes(res['data'], False)
return res
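`getdata` above wraps every request in a sign/nonce/timestamp envelope: `sign` is the MD5 hex digest of the base64-encoded JSON body concatenated with the nonce, the Unix timestamp, and the shared secret string. A self-contained sketch of just that signing step (`make_envelope` is a hypothetical name, standard-library `hashlib` stands in for the pycryptodome MD5 helper, and the AES-encrypted `data` field is omitted):

```python
import base64
import hashlib
import json
import time
import uuid

SECRET = "AjPuom638LmWfWyeM5YueKuJ9PuWLdRn"  # shared secret used in getdata()

def make_envelope(payload, nonce=None, ts=None):
    """Build the sign/nonce/timestamp triple that getdata() sends.

    The real request additionally carries the AES-CBC ciphertext of
    `payload` in a `data` field; only the signature is reproduced here.
    """
    nonce = nonce or str(uuid.uuid4())
    ts = int(time.time()) if ts is None else ts
    body_b64 = base64.b64encode(json.dumps(payload).encode("utf-8")).decode("utf-8")
    sign = hashlib.md5(f"{body_b64}{nonce}{ts}{SECRET}".encode("utf-8")).hexdigest()
    return {"sign": sign, "nonce": nonce, "timestamp": ts}

env = make_envelope({"q": "", "route": "/videos/search"}, nonce="n", ts=0)
print(env["sign"])  # 32-character lowercase hex digest
```

Pinning `nonce` and `ts` makes the signature reproducible, which is handy when comparing against a captured request.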
