
centos7 install git2.x

wget https://packages.endpointdev.com/rhel/7/os/x86_64/git-core-doc-2.38.1-1.ep7.noarch.rpm
wget https://packages.endpointdev.com/rhel/7/os/x86_64/git-all-2.38.1-1.ep7.noarch.rpm
wget https://packages.endpointdev.com/rhel/7/os/x86_64/git-2.38.1-1.ep7.x86_64.rpm
wget https://packages.endpointdev.com/rhel/7/os/x86_64/git-core-2.38.1-1.ep7.x86_64.rpm
wget https://packages.endpointdev.com/rhel/7/os/x86_64/perl-Git-2.38.1-1.ep7.noarch.rpm
yum -y install perl-Git-2.38.1-1.ep7.noarch.rpm  git-2.38.1-1.ep7.x86_64.rpm  git-core-2.38.1-1.ep7.x86_64.rpm  git-core-doc-2.38.1-1.ep7.noarch.rpm
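A quick sanity check after the install (not part of the original notes; assumes yum resolved the remaining dependencies):

git --version   #expect: git version 2.38.1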

kernel header not found for target kernel: installing the vbox guest ISO

0. Background

  1. If the drivers on the guest ISO are not installed, network bandwidth inside the CentOS 7 VM is capped at roughly 2 MByte/s; after installing them the VM's network speed is normal
  2. The guest OS in the VirtualBox VM is CentOS 7 x64

  3. In VirtualBox choose "Devices" --> "Insert Guest Additions CD image" to attach the guest ISO as the VM's /dev/cdrom
  4. Try installing the drivers from the guest ISO:
mkdir /media/cd/
mount /dev/cdrom /media/cd
cd /media/cd/
./VBoxLinuxAdditions.run  #fails here with: kernel header not found for target kernel

1. The error is as follows:

./VBoxLinuxAdditions.run
"""Verifying archive integrity...  100%   MD5 checksums are OK. All good.
Uncompressing VirtualBox 7.0.6 Guest Additions for Linux  100%
VirtualBox Guest Additions installer
Removing installed version 7.0.6 of VirtualBox Guest Additions...
VirtualBox Guest Additions: Starting.
VirtualBox Guest Additions: Setting up modules
VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel
modules.  This may take a while.
VirtualBox Guest Additions: To build modules for other installed kernels, run
VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup <version>
VirtualBox Guest Additions: or
VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup all
VirtualBox Guest Additions: Kernel headers not found for target kernel
3.10.0-1160.el7.x86_64. Please install them and execute
  /sbin/rcvboxadd setup
modprobe vboxguest failed
The log file /var/log/vboxadd-setup.log may contain further information.
"""

2. Fixing the error

uname -r
#3.10.0-1160.el7.x86_64
yum -y install gcc make  #if gcc is missing the error is also: kernel header not found for target kernel; it never says gcc is missing
yum -y install kernel-devel-`uname -r` kernel-headers-`uname -r` 

#/sbin/rcvboxadd quicksetup  `uname -r`  #run this line if the version in the error message differs from `uname -r`

reboot  #after the VM reboots, reinstall from the guest ISO

cd /media/cd/
./VBoxLinuxAdditions.run  #now installs normally
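A hedged way to confirm the Guest Additions modules are actually loaded after the reinstall (not in the original notes):

lsmod | grep vboxguest          #the vboxguest module should be listed
modinfo vboxguest | head -n 3   #shows the module version, e.g. 7.0.6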


go0gle c0lab ssh

1. Register with nggr0k and get a token

Register with nggr0k and get a token (nggr0k: 0 –> o)

2. Install coolab_ssh in coolab

# Google coolab detects coolab_ssh, so the package name has to be split apart
!pip install `echo col``echo ab``echo _ssh`

nggr0kToken='xxx'
paswd='123'
from c0lab_ssh import launch_ssh, init_git
launch_ssh(nggr0kToken,paswd)


#the output looks something like:
"""

  Host google_coolab_ssh
    HostName 1.tcp.nggr0k.io
    User root
    Port 11234
"""

3. using online web ssh client

sshgate

4. style transfer : VToonify

VToonify : also the reference the script below is based on

git clone https://github.com/williamyang1991/VToonify.git
cd VToonify
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
source /root/miniconda3/bin/activate
conda env create -f ./environment/vtoonify_env.yaml
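The snippet above downloads the Miniconda installer but never runs it; a minimal sketch of the missing steps, assuming the /root/miniconda3 prefix that is sourced above and that the environment name defined in vtoonify_env.yaml is vtoonify_env (check the name: field in the yaml):

bash ./Miniconda3-latest-Linux-x86_64.sh -b -p /root/miniconda3   #non-interactive install into the prefix used by the activate line
conda activate vtoonify_env                                       #assumed env name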

Note: claash is deliberately spelled with an extra 'a', and prooxychains with an extra 'o'.

prooxy on Linux

claash

claash v1.9.0 page
claash-linux-amd64-v1.9.0.gz

GeoLite.mmdb page
GeoLite2-Country.mmdb

“WARN[0000] MMDB invalid, remove and download”
ref1
ref2

Edit 1670336651611.yml: change the external controller port to 7777

1670336651611.yml: exported from claash for Windows

#...
external-controller: 7777
#...
cd /claash-home/

#this fixes the claash startup error: "WARN[0000] MMDB invalid, remove and download"
wget https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-Country.mmdb  -O Country.mmdb


wget https://github.com/Dreamacro/claash/releases/download/v1.9.0/claash-linux-amd64-v1.9.0.gz
gzip -d claash-linux-amd64-v1.9.0.gz



#1670336651611.yml: exported from claash for Windows
./claash-linux-amd64-v1.9.0 -d . -f 1670336651611.yml  -ext-ctl  10.11.1.107:7777
"""
INFO[0000] Start initial compatible provider 最新域名
INFO[0000] Start initial compatible provider 动画疯
INFO[0000] Start initial compatible provider 故障切换
INFO[0000] Start initial compatible provider 自动选择
INFO[0000] Start initial compatible provider 手动选择
INFO[0000] Start initial compatible provider 节点选择
INFO[0000] Start initial compatible provider 国外网站
INFO[0000] Start initial compatible provider 电报吹水
INFO[0000] HTTP proxy listening at: [::]:7890
INFO[0000] RESTful API listening at: [::]:9090
INFO[0000] SOCKS proxy listening at: [::]:7891
ERRO[0000] Start DNS server error: missing port in address
"""

claash DNS configuration (add a dns section to the original config file 1670336651611.yml)

dns:
  enable: true
  ipv6: false
  enhanced-mode: fake-ip
  fake-ip-range: 192.168.0.1/24
  fake-ip-filter:
    - "*.lan"
    - "*.local"
    - dns.msftncsi
    - www.msftncsi
    - www.msftconnecttest
    - stun.*.*.*
    - stun.*.*
    - miwifi
    - music.163
    - "*.music.163"
    - "*.126"
    - api-jooxtt.sanook
    - api.joox
    - joox
    - y.qq
    - "*.y.qq"
    - streamoc.music.tc.qq
    - mobileoc.music.tc.qq
    - isure.stream.qqmusic.qq
    - dl.stream.qqmusic.qq
    - aqqmusic.tc.qq
    - amobile.music.tc.qq
    - "*.xiami"
    - "*.music.migu"
    - music.migu
    - netis
    - router.asus
    - repeater.asus
    - routerlogin
    - routerlogin
    - tendawifi
    - tendawifi
    - tplinklogin
    - tplinkwifi
    - tplinkrepeater
    - "*.ntp"
    - "*.openwrt.pool.ntp"
    - "*.msftconnecttest"
    - "*.msftncsi"
    - localhost.ptlogin2.qq
    - "*.*.*.srv.nintendo"
    - "*.*.stun.playstation"
    - xbox.*.*.microsoft
    - "*.ipv6.microsoft"
    - "*.*.xboxlive"
    - speedtest.cros.wr.pvp
  default-nameserver:
    - 10.11.1.1
    - 114.114.114.114
  nameserver:
    - https://doh.pub/dns-query
    - https://dns.alidns.com/dns-query
    - https://dns.google/dns-query
#the following is the original content of 1670336651611.yml
port: 7890
socks-port: 7891

This enables claash's own DNS resolution, and the error above, "ERRO[0000] Start DNS server error: missing port in address", disappears.

Web client for the Linux claash (http://claash.razord.top/#/proxies) (connects to the external-controller port 7777 of the local Linux claash)

To control the Linux claash, open http://claash.razord.top ;
it works the same way as claash for Windows.

Open http://claash.razord.top/#/proxies in local Chrome,

Make sure local Chrome already has the SwitchyOmega extension installed (connected to the local Windows claash for proxied access).
Note: configure SwitchyOmega so that 10.11.1.107 does NOT go through the proxy.
Enter the 10.11.1.107 and 7777 from above, leave secret empty, and you are connected to the claash running on Linux.
Then switch claash on Linux to a proxy node that actually works.

proxy (prooxychains)

ref

Config file /etc/prooxychains.conf, with the following content:

#file:/etc/prooxychains.conf
strict_chain
#proxy_dns  #remove this line; leave DNS to claash

tcp_read_time_out 15000
tcp_connect_time_out 8000

[ProxyList]

socks5 127.0.0.1 7891
#note the claash startup log above: claash's socks5 port is 7891, not 7890
#usage examples
prooxychains  curl https://www.google.com
prooxychains  w3m  https://www.google.com

docker pull not working: solved with prooxychains4

mkdir -p /etc/systemd/system/docker.service.d

vim /etc/systemd/system/docker.service.d/https-proxy.conf

[Service]
Environment="HTTPS_PROXY=socks5://127.0.0.1:1080"

prooxychains4 docker pull fedora:36

frida

frida install

pip install frida==16.0.7 frida-tools==12.0.4

frida script debugging is not very useful (variables that exist under the v8 runtime may not exist at all under the real runtime)

frida -f d:\instrmcpp\dork\cmake-build-debug\dork.exe --debug --runtime=v8 -l d:\frida-home\frida-agent-4instrmcpp\attach_operator_new.js --pause



The steps for debugging a frida script are as follows:

However, when debugging this way, the contents of args are probably not quite the same as when runtime=v8 is not used.
frida's args is an array, but frida does not know its length; you have to determine the length yourself (e.g. from the debug info, or from the source code).
args has no length ,
args has no length: detailed answer

frida script example

frida -f /machine_learning-home/shark/build_debug/bin/Statistics  --debug --pause
ls=DebugSymbol.findFunctionsMatching("*C2Ev").map(functionAddressK => DebugSymbol.fromAddress(functionAddressK));
/*ls[0]
{
    "address": "0x7f03b53c2230",
    "column": 0,
    "fileName": "",
    "lineNumber": 0,
    "moduleName": "libstdc++.so.6.0.28",
    "name": "_ZNSt14error_categoryC2Ev"
}
*/

moduleLs = ls.reduce((s, k) => { if (!s.includes(k.moduleName))   return [...s, k.moduleName]; else  return s; }, []);
/*
[
    "libstdc++.so.6.0.28",
    "Statistics"
]
*/

funcLs_Statistics=ls.filter(fK=>fK.moduleName=="Statistics");


How to pass arguments to the application frida runs

  1. See: frida_run_app.sh

  2. Use frida to intercept the application's main function and add the arguments through frida, so more general arguments can be passed to the application.
    Reference: frida-spawn-a-windows-linux-process-with-command-line-arguments/72880066#72880066

Fix for “Failed to load script: timeout was reached”

Reference: fix for frida script timeout

//assume this file is named example.js
setTimeout(function () {
  //move the original js code in here, and the error "Failed to load script: timeout was reached" will no longer appear


}, 0);

#run frida and inject the script example.js; because example.js defers its work, frida no longer reports the error: "Failed to load script: timeout was reached"
frida -f xxx_dork.exe --load example.js

dork list (possible)

ml dork

Top 10 Libraries In C/C++ For Machine Learning

autograd

neural-network

caffe

CNTK

mlpack

dynet

flashlight

MegEngine

The following look hard to build:

tensorflow

jax

dork list

Shark

build Shark , Release

#WSL2   Ubuntu-20.04

apt info  libboost-all-dev   #Boost version (currently 1.71)
apt install -y libboost-all-dev

cd /machine_learning-home/
git clone git@gitcode.net:machine_learning/shark.git

cd /machine_learning-home/shark/ 
cmake -S . -B build
cd build;  make -j 8

#test:
/machine_learning-home/shark/build/bin/Statistics


build, Debug

#...
cmake -S . -B build_debug -DCMAKE_BUILD_TYPE=Debug
cd build_debug/ ; make -j 8
#...

Release diff Debug

cd /machine_learning-home/shark/build;  tree -L 5 > /tmp/release.tree
cd /machine_learning-home/shark/build_debug;  tree -L 5 > /tmp/debug.tree

diff /tmp/release.tree /tmp/debug.tree
"""2839c2839
< │   └── libshark.a
---
> │   └── libshark_debug.a
"""

opennn

cd /machine_learning-home/opennn/
cmake -S . -B build_debug -DCMAKE_BUILD_TYPE=Debug -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON
cd build_debug ; make -j 8
ln -s /machine_learning-home/opennn/examples/iris_plant/data /machine_learning-home/opennn/build/examples/data
cd /machine_learning-home/opennn/build/examples/iris_plant/
#./iris_plant  #runs fine on its own
frida -f ./iris_plant  --debug --pause
"""
ls=DebugSymbol.findFunctionsMatching("*").map(functionAddressK => DebugSymbol.fromAddress(functionAddressK));
//try listing all functions; this is probably quite slow.
"""

bash-completion

bash-completion

#dependency:
sudo apt install pytest-benchmark
sudo ln -s     /usr/bin/pytest-3  /usr/bin/pytest
sudo apt install python3-pexpect


#build and install:
git clone git@gitcode.net:tmp/bash-completion.git
cd bash-completion
autoreconf -i
./configure
make
sudo make check
sudo make install
#sudo make uninstall#

#run:
echo """
# Use bash-completion, if available
[[ $PS1 && -f /usr/share/bash-completion/bash_completion ]] && \
    . /usr/share/bash-completion/bash_completion
""" >> ~/.bashrc

ref

#if the install above reports the error: _comp_initialize : command not found,
#fix:
cd bash-completion && sudo make uninstall && cd -
#then install bash-completion by apt-get
sudo apt-get install --reinstall bash-completion
source /etc/bash_completion
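A quick, hedged way to confirm completions are loaded in the current shell (_init_completion is a helper defined by the distro bash-completion package):

type _init_completion   #should print a function definition once bash_completion is sourced
complete -p | wc -l     #number of registered completions; should be well above zero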

Viewing debug information

Examples of debug info formats: stabs, COFF, PE-COFF, OMF, IEEE-695, DWARF

PDB symbol browsing tools : SymView, Pdbripper

Use horsicq/XELFViewer to view DWARF debug information

#XELFViewer can be used to inspect the debug information (symbol table) in the following ELF executables
#/machine_learning/shark/build_debug/bin/Statistics
#d:\instrmcpp\dork_simple\User.cpp  #compile it into a.out:  g++ -g User.cpp
#/machine_learning/opennn/build_debug/examples/mnist/mnist
#the function names seen there look like:
# _ZNSt12_Vector_baseIjSaIjEE12_Vector_implD2Ev
# _ZN4UserD2Ev   ,  _ZN4UserC4Ev

#as a cross-check, open these ELF files in ida and you can see the source-level function names that the _ZN* symbols above correspond to

_ZN*D*Ev , _ZN*C*Ev

_ZN*D*Ev , _ZN*C*Ev

Inside the C++ Object Model
C++ vtable analysis

Building tensorflow

Building tensorflow v2.11.0

1. Install the build tool bazel (using the Tsinghua mirror)

Reference: Tsinghua mirror bazel-apt

sudo apt install apt-transport-https curl gnupg

#curl -fsSL https://bazel.build/bazel-release.pub.gpg | gpg --dearmor >bazel-archive-keyring.gpg
curl -fsSL https://gitee.com/pubz/misc/raw/36806b6978e730016b2cf3a89e73e55094088fd2/bazel-release.pub.gpg | gpg --dearmor >bazel-archive-keyring.gpg

sudo mv bazel-archive-keyring.gpg /usr/share/keyrings/
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/bazel-archive-keyring.gpg] https://mirrors.tuna.tsinghua.edu/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list

Note: https://bazel.build/bazel-release.pub.gpg has a mirror on gitee

If a signature error is reported, reinstall the CA certificates:

sudo apt-get install --reinstall ca-certificates

#sudo apt update && sudo apt install bazel
sudo apt update && sudo apt install bazel-3.1.0
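A hedged sanity check that bazel is usable (the versioned apt package may install only a bazel-3.1.0 binary rather than a bazel wrapper):

bazel version || bazel-3.1.0 version   #should report build label 3.1.0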

2. install build-essential and clang

sudo apt install build-essential  clang-14 -y

3. build tensorflow

conda create -n build-tensorflow python=3.8

conda activate build-tensorflow

#pip Tsinghua mirror
python -m pip install --upgrade pip
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

pip install     pip numpy wheel
pip install     keras_preprocessing --no-deps

pip install packaging


cd /machine_learning; git clone git@gitcode.net:machine_learning/tensorflow.git
#git clone https://github.com/tensorflow/tensorflow.git
cd /machine_learning/tensorflow; git checkout v2.11.0

cd /machine_learning/tensorflow
./configure
"""
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: N
No OpenCL SYCL support will be enabled for TensorFlow.
Do you wish to build TensorFlow with ROCm support? [y/N]: N
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: N
No CUDA support will be enabled for TensorFlow.
Do you wish to download a fresh release of clang? (Experimental) [y/N]: N
Clang will not be downloaded.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]: -std=c++14  -Wno-error
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: N
"""

bazel build  --subcommands //tensorflow/tools/pip_package:build_pip_package

#bazel build //tensorflow/tools/pip_package:build_pip_package

#cat -n  /home/z/.cache/bazel/_bazel_z/e167e5f1142e509062dbbcf47c207bce/external/com_google_absl/absl/synchronization/internal/graphcycles.cc
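Once the bazel build above succeeds, it has only produced the build_pip_package tool; the remaining steps, roughly as in the TensorFlow from-source build docs (wheel name depends on the python version), are:

./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-2.11.0-*.whl
python -c "import tensorflow as tf; print(tf.__version__)"   #quick smoke test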






bazel debug build : --compilation_mode dbg
bazel optimized (release) build : --compilation_mode opt   (fastbuild is the default mode)

--compilation_mode (fastbuild|opt|dbg) (-c)

#build commands not tried yet
#bazel build  --subcommands --jobs 4  //tensorflow/tools/pip_package:build_pip_package  #"--jobs 4" not tried yet, not sure it is correct
#bazel build  --subcommands --compilation_mode dbg //tensorflow/tools/pip_package:build_pip_package 
#bazel --host_jvm_args "-DsocksProxyHost=10.11.1.115 -DsocksProxyPort=7890"  build  --subcommands --compilation_mode dbg //tensorflow/tools/pip_package:build_pip_package 
find  /machine_learning/tensorflow/ -type f  -not -path '*/\.git/*' | xargs -I% grep -Hn  io_bazel_rules_docker %

Fixing build errors

external/ruy/ruy/block_map.cc:375:25: error: ‘numeric_limits’ is not a member of ‘std’
cd ~/.cache/bazel/_bazel_z/e167e5f1142e509062dbbcf47c207bce/execroot/org_tensorflow/
/usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections -fdata-sections  -MD -MF bazel-out/host/bin/external/ruy/ruy/_objs/block_map/block_map.d '-frandom-seed=bazel-out/host/bin/external/ruy/ruy/_objs/block_map/block_map.o' -iquote external/ruy -iquote bazel-out/host/bin/external/ruy -g0 -g0 '-std=c++14' -Wall -Wextra  -Wundef -O3 -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c external/ruy/ruy/block_map.cc -o bazel-out/host/bin/external/ruy/ruy/_objs/block_map/block_map.o
external/ruy/ruy/block_map.cc: In function ‘void ruy::MakeBlockMap(int, int, int, int, int, int, int, int, const ruy::CpuCacheParams&, ruy::BlockMap*)’:
external/ruy/ruy/block_map.cc:375:25: error: ‘numeric_limits’ is not a member of ‘std’
  375 |   int best_score = std::numeric_limits<int>::min();
      |                         ^~~~~~~~~~~~~~
external/ruy/ruy/block_map.cc:375:40: error: expected primary-expression before ‘int’
  375 |   int best_score = std::numeric_limits<int>::min();

Fix: add "#include <limits>" to the corresponding header ~/.cache/bazel/_bazel_z/e167e5f1142e509062dbbcf47c207bce/execroot/org_tensorflow/external/ruy/ruy/block_map.h:

#include <limits>
ModuleNotFoundError: No module named ‘packaging’

The error:

from packaging import version as packaging_version  # pylint: disable=g-bad-import-order
#error: ModuleNotFoundError: No module named 'packaging'


Fix:

conda activate build-tensorflow
pip install packaging

Building tensorflow v2.11.0 on WSL2 Ubuntu 20.04 @ win10; only the differences are noted

git checkout v2.11.0
sudo apt update && sudo apt install bazel-5.3.0
#the physical ubuntu22 machine at home seems to use bazel-3.1.0 ?
dos2unix /mnt/d/machine_learning-home/tensorflow/configure
dos2unix /mnt/d/machine_learning-home/tensorflow/tensorflow/lite/experimental/acceleration/configuration/BUILD


cd /mnt/d/machine_learning-home/tensorflow/
./configure
bazel build --subcommands  //tensorflow/tools/pip_package:build_pip_package

Building pytorch

  1. pytorch v0.3.0 has no torch/torch.h, so the C++ example cannot be compiled directly (and no C++ example that fits v0.3.0 could be found), so v0.3.0 was abandoned.
  2. Tried moving to v0.4.0, but its submodule NervanaSystems/nervanagpu.git no longer exists on github, so v0.4.0 was abandoned as well.
  3. Tried the next version, v1.0.0, but it still fails during python setup.py xxx: the import paths in tools/shared/__init__.py are broken. v0.3.0 had a similar error (already fixed once there, quite a hassle), so moved on again to v1.3.1.

Building pytorch v0.3.0

v0.3.0 is an old version and easy to build

cd /machine_learning-home/
git clone git@gitcode.net:machine_learning/pytorch-group/pytorch.git
#/machine_learning-home/pytorch/.git/

cd /machine_learning-home/pytorch/
#git checkout v0.3.0
git checkout -b branch-v0.3.0

export CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" # [anaconda root directory]
pip install numpy pyyaml mkl setuptools cmake cffi

export NO_CUDA=1

#export CMAKE_BUILD_TYPE=Debug  #has no effect
export DEBUG=1  #works; DEBUG is read in setup.py

dos2unix pytorch/torch/lib/build_libs.sh
#clone the submodules: git submodule update --init
python setup.py install


#test after the build:
cd /tmp/  && python -c "import torch"

#note: after the build, do not run the test "import torch" from inside the directory /machine_learning-home/pytorch/,
#because there "import torch" would wrongly match the directory /machine_learning-home/pytorch/torch/;
#normally "import torch" should resolve to the directory /home/z/miniconda3/envs/build-pytorch/lib/python3.8/site-packages/torch



Reference: pytorch/*/v0.3.0/README.md

Misc

python setup.py --help-commands | grep install_lib


pytorch build errors and fixes

py_compile reports syntax error during python setup.py install_lib

Error description

python setup.py install_lib
#py_compile reports errors like the following while byte-compiling:
#syntax error:  /home/z/miniconda3/envs/build-pytorch/lib/python3.8/site-packages/torch/autograd/variable.py
#syntax error:  /home/z/miniconda3/envs/build-pytorch/lib/python3.8/site-packages/torch/autograd/_functions/tensor.py
#syntax error:  /home/z/miniconda3/envs/build-pytorch/lib/python3.8/site-packages/torch/cuda/comm.py

Fix: replace every async with async_

find /machine_learning-home/pytorch -name "*.py" -a -type f | xargs -I% sh -c "grep -Hn  async % && sed -i 's/async/async_/g' %  "
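A slightly safer variant (untested sketch): restrict the substitution to whole words so identifiers that merely contain "async" (e.g. "asynchronous") are left alone; GNU sed's \b word boundary is assumed:

find /machine_learning-home/pytorch -name "*.py" -a -type f | xargs -I% sed -i 's/\basync\b/async_/g' %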

How the fix was found:

python2.7  -c "import py_compile; py_compile.compile('/home/z/miniconda3/envs/build-pytorch/lib/python3.8/site-packages/torch/autograd/_functions/tensor.py')"
#surprisingly, with py2.7 it compiles fine
#async is not a keyword in py2.x, but it is a keyword in py3.x

pytorch build: v0.4.0 was abandoned because the submodule NervanaSystems/nervanagpu.git no longer exists

pytorch build, version v1.0.0 (similar to v1.3.1)

v1.0.0 still fails during python setup.py xxx: the import paths in tools/shared/__init__.py are broken; v0.3.0 had a similar error (already fixed once there, quite a hassle), so moved on again to v1.3.1.

pytorch build, version v1.3.1

pytorch v1.3.1 build process

cd /machine_learning-home/
git clone git@gitcode.net:machine_learning/pytorch-group/pytorch.git  pytorch.v1.3.1
cd /machine_learning-home/pytorch.v1.3.1/
git switch -c branch-v1.3.1 origin/branch-v1.3.1   #or use lazygit to switch to the branch branch-v1.3.1

rm -fr .git/modules/third_party/*  third_party/*  .git/modules/android/libs/*  android/libs/*  
git submodule update --init --recursive



Build process reference

conda create -n build-pytorch.v1.3.1 python=3.8
conda activate build-pytorch.v1.3.1

#pip Tsinghua mirror
python -m pip install --upgrade pip
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

pip install  astunparse numpy ninja pyyaml setuptools cmake cffi typing_extensions future six requests dataclasses

pip install mkl mkl-include


export USE_CUDA=0
export USE_ROCM=0

#export CMAKE_BUILD_TYPE=Debug  #has no effect
export DEBUG=1  #works; DEBUG is read in setup.py

#dos2unix pytorch/torch/lib/build_libs.sh

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
#echo $CMAKE_PREFIX_PATH
#/home/z/miniconda3/envs/build-pytorch.v1.0.0



CMAKE_VERBOSE_MAKEFILE=True python setup.py develop

#when using conda's base environment or the system python directly, the install command below may affect everyday use and is not recommended; when using a dedicated conda environment or running inside docker, it is fine to run it
python setup.py install
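A quick check that the DEBUG=1 build is the one actually being imported (run it outside the source tree, as noted for v0.3.0; torch.version.debug is assumed to be present in v1.3.1):

cd /tmp/ && python -c "import torch; print(torch.__version__, torch.version.debug)"   #version string plus True for a debug build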

pytorch v1.3.1 example & frida

pip install frida==16.0.7 frida-tools==12.0.4
cd /machine_learning/pytorch.v1.3.1-py37/example/cpp_frontend_net/
sh build.sh
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/machine_learning/pytorch.v1.3.1-py37/torch/lib/  frida -f net --debug --pause

Errors and fixes

Device.cpp:218:1: error: cannot convert ‘std::nullptr_t’ to ‘Py_ssize_t’
1. Error description

pytorch/issues/29162
python pep-0590

2.1 Fix 1: merge the fix commit commit/86c64440c9169d94bffb58b523da1db00c896703

That fix commit commit/86c64440c9169d94bffb58b523da1db00c896703 is already on the branch origin/branch-v1.3.1-fix, so switching to origin/branch-v1.3.1-fix is enough.

git pull
git switch -c branch-v1.3.1-fix origin/branch-v1.3.1-fix

The worry is that, in theory, this change should somehow be verified to be correct; fix 2 (falling back to py3.7) does not have this worry.

2.2 Fix 2: fall back from python3.8 to python3.7
conda create -n build-pytorch.v1.3.1-py37 python=3.7
conda activate build-pytorch.v1.3.1-py37

Misc

web ssh client

webssh

pip install webssh
wssh --address='0.0.0.0' --port=8888
ip address #10.11.1.107
#browser access:  http://10.11.1.107:8888/

git text tui (git tui)

lazygit ,
tig,
gitui

mangle & demangle, dwarf : searching for _ZN (trying to find the rules?)


/mnt/d/gcc-home/gcc$ find . -type f -not -path '*/\.git/*' -a -type f  | xargs -I% grep -Hn "_ZN" % > ../search_ZN.txt

search_ZN.txt


./libstdc++-v3/testsuite/abi/demangle/regression/cw-06.cc:46:  verify_demangle("_ZNKSt17__normal_iteratorIPK6optionSt6vectorIS0_SaIS0_EEEmiERKS6_",



./libstdc++-v3/config/abi/pre/gnu.ver:601:    _ZNSt7num_getI[cw]St19istreambuf_iteratorI[cw]St11char_traitsI[cw]EEE[CD][012]*;

demangle & mangle

github topics/demangle

demangle & mangle

demangle means decoding, mangle means encoding

demangle examples:

#demangle example:
#_Z23this_function_is_a_testi    ---->  this_function_is_a_test(int)

#_ZNSt22condition_variable_anyD2Ev  ---->  std::condition_variable_any::~condition_variable_any()

c++filt is also a demangling tool
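For example, piping the two symbols listed above through c++filt (part of binutils):

echo _Z23this_function_is_a_testi | c++filt        #this_function_is_a_test(int)
echo _ZNSt22condition_variable_anyD2Ev | c++filt   #std::condition_variable_any::~condition_variable_any()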

demangle : original

demangle in LLVM

demangle in GCC

demangle : python

afq984/python-cxxfilt
benfred/py-cpp-demangle

demangle : javascript

arthurmco/demangler-js

demangle : cpp

nico/demumble
