
Python Trojan Programming

Overview

In this article, we will build a Python-based trojan that does the following:

  1. Download code unrelated to the trojan from a remote server and run it

  2. Update the code downloaded in (1)

  3. Update itself

  4. Rerun itself (immune to any signal except SIGKILL)

  5. Acquire and transfer root privileges (and thus do about anything on the target machine)

  6. Send data over HTTP to the attacker

We begin with a simple assumption: the target executes some code that is beneficial to it. It might be anything, like a Python package that performs some task the user thinks is worth doing.

Source code: GitHub

Introduction

Trojans are powerful because they look benign and are among the foremost candidates for evading suspicion. Once run, they go about their malicious intent while looking perfectly fine to the target. More so since targets (especially developers) are usually not suspicious of grabbing open-source code/packages and running them. That makes it a good entry point for our exploit.

The ‘good’ code

The good code is simple. It does what the target intends it to do. It might range across a variety of things and span a whole package; the bigger the codebase, the harder it is to spot the malicious activity. We’ll skip that part and write a simple script that prints something.

The good code, with somewhat bad intents.

To the target, this script should do what it is meant to (printing a simple line, in our case) and exit peacefully. The main stuff here, however, is everything else. The script builds a directory, downloaded (in a real run you would want the working directory to be somewhere hidden; I’ll skip that for conciseness), switches to it, makes a cURL request (read more about cURL here) to some server at http://192.168.43.38:9000/downloader.py, and saves the returned content to a Python script downloader.py. It then fires the command python3 downloader.py and peacefully exits. Since Popen was used, the child process (running downloader.py) disassociates from the parent (good.py) on the parent’s exit and is re-parented to init. So, effectively, it becomes a separate process. The function run_command() is the Python equivalent of a shell: it runs the specified command and returns the output from STDOUT, i.e. what you would have received had you used a shell.
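
The run_command() shell-equivalent mentioned above is not shown in full here; a minimal sketch of what such a helper typically looks like (the exact implementation in the original gist may differ):

```python
import shlex
import subprocess

def run_command(command):
    # Run the given command (as a shell would) and return its STDOUT as text.
    result = subprocess.run(shlex.split(command), capture_output=True, text=True)
    return result.stdout
```

For example, run_command("echo hi") returns the same "hi\n" a shell would print.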

Now is the time to configure this http://192.168.43.38:9000.

Server end

The idea is to build a server that automatically pushes code to the remote end. It will later be used to update code in real time, transfer files, commands, and a lot more. All we need to do is configure an HTTP server capable of handling POST and GET.

Basic HTTP server skeleton in setup_server.py

An HTTPServer in Python runs on two pieces of information: where to put it up and what to do on interaction. The former part is handled by ('192.168.43.38', 9000), which binds the server to port 9000 of the machine, 192.168.43.38 being the local IP. The latter part is handled by a separate class extending BaseHTTPRequestHandler that defines the functionality for POST and GET. The _set_response() function serves to send the mandatory HTTP header information (the header, code 200 representing success, and the end-header marking the end of the headers). Since we shall be dealing with text data, setting the Content-type to text/html is fine. Now to add the two main functions.
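
The skeleton just described closely follows the standard library's http.server pattern. A minimal sketch, assuming the GET body is a placeholder to be filled in by the GET and POST sections that follow:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def _set_response(self):
        # Mandatory header information: success code 200, Content-type, end of headers.
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()

    def do_GET(self):
        self._set_response()
        self.wfile.write(b'server up')  # placeholder body

def serve(address=('192.168.43.38', 9000)):
    # Bind to the given (IP, port) pair and serve until interrupted.
    HTTPServer(address, Handler).serve_forever()
```

serve() blocks; the default address is the attacker-side local IP and port used throughout the article.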

GET.py

A GET request, amongst other things, contains a path to get. In this case, it would be the filepath part of http://192.168.43.38:9000/filepath . It might be empty (analogous to the home page of a website you visit) or it might contain something (analogous to further pages you visit from the home page). In our case, we wish to return the list of files available on the server when a GET is made to our home, i.e. http://192.168.43.38:9000/ , and return the contents of a file when a specific file is requested, as in http://192.168.43.38:9000/downloader.py . The technique is to distinguish the two cases, open the file in the latter case, craft a response, and send it with wfile.write().

A POST request contains data that must be handled.

POST.py

This gets the length from the HTTP header, extracts the data using rfile.read(), and stores it to a file. The split on SPLIT is just a convenient way of handling newlines (for me! I am just never able to preserve newlines in data sent over a network; to a file, it looks like one huge single-line string. To overcome this, I replace \n at the target end with a SPLIT marker, which is then replaced back at the server end). Finally, we send a POST OK using wfile.write().
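
The SPLIT workaround amounts to a trivial encode/decode pair. A sketch, assuming the marker is the literal string SPLIT (note this breaks if the payload itself contains that marker):

```python
MARKER = 'SPLIT'

def encode_newlines(text):
    # Applied before sending, so the payload travels as one line.
    return text.replace('\n', MARKER)

def decode_newlines(text):
    # Applied at the receiving end to restore the original newlines.
    return text.replace(MARKER, '\n')
```

The round trip is lossless as long as the marker never occurs in the data itself; that caveat is the price of the convenience.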

Now we move on to crafting other stuff. First, downloader.py

Stage I execution

Our good code serves to load a simple downloader.py into a specified directory and execute it. From here on, it is mainly up to downloader.py to handle everything else. As a start, we want downloader.py to handle these things:

  1. Know when files at the server end have been updated.

  2. Know when the code for downloader.py itself has been updated.

  3. Evade all signals possible (not even a simple kill PID should terminate it).

  4. Download updated files from the server end and execute them.

  5. Hide itself (too large a subtopic, and thus not dealt with here).

  6. Schedule itself (related to cron, not covered here) and build kernel persistence.

For the first two, the following code is sufficient.

downloader.py

There might be several ways to check status. I settled on creating a special file, status.txt, having binary digits on two separate lines. The first line has a 0/1 that denotes whether the files on the server have been updated (or whether the attacker wants to rerun certain programs), and the second line has a 0/1 that denotes whether downloader.py itself has to be updated. Quite intuitively, we download status.txt and analyse the flags. If we obtain a 1 on the first line, we move on to fetching and downloading files (since updating downloader.py involves the same operations, the second flag is not checked here). Once the data is retrieved, we delete status.txt since it is no longer needed.
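
Parsing the two flags could look like this; a sketch based only on the format described above (two lines, each 0 or 1):

```python
def parse_status(text):
    # Line 1: 1 if files on the server were updated (or should be rerun).
    # Line 2: 1 if downloader.py itself has to be updated.
    lines = text.strip().splitlines()
    files_updated = lines[0].strip() == '1'
    self_update = lines[1].strip() == '1'
    return files_updated, self_update
```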

The third requirement is dealt with simply, as follows.

Handling signals and termination

Receiving any of the listed signals causes a re-download of downloader.py and a re-run as a new process. Only SIGKILL, which cannot be caught or ignored, kills the process.

The fourth and main part goes as follows.

Fetching files

Recall from the discussion on GET that a request of the form http://192.168.43.38:9000 returns a list of the files hosted on the server. We obtain that list and run a loop over it (ignoring status.txt, as it has already been downloaded and analysed). Should we come across downloader.py (i.e. it is available on the server) while data[1] == 1 (it needs to be updated), we fire refresh_downloader(), which refreshes downloader.py by re-downloading it, starting a new process, and exiting the current one. If this is not the case, we simply build a list of files to be downloaded, which is handled in the next snippet.

Download files

Simply create cURL requests to the respective URLs and download the files. Once they are downloaded, run_scripts() runs all files in the working directory except downloader.py (running it again would result in a fork bomb for as long as status.txt has a 1 on its first line, the flag indicating to keep downloading and refreshing files) and setup_server.py (which contains the server script discussed before). [You shouldn’t keep setup_server.py on the server itself, but then I was lazy enough not to fix that :)]

Here’s the complete code for downloader.py.

Complete downloader.py. The commented line ‘command_list.append("sudo")’ is added for privilege transfer later.

That’s pretty much it!

Sample run

Initial setup on the target side.
Initial setup on the attacker side.
Running good.py. Notice the ps -A output on the left.
Notice the ps -A output after good.py exits.
downloader.py performing GET requests to the server for status.txt.

status.txt up to now is set as:

A simple two-line text file containing 0 and 0.

Now update some part of downloader.py. I added a simple print statement before the check_status() call in the while loop, then set status.txt to 1 and 1 and waited for the next GET request. Here’s the result:

As the status was changed to 1 and 1, notice on the left how GET requests were made for downloader.py. Afterwards, normal requests for status.txt resumed.
Printing
Notice the change in PID of downloader.py.

Stage II execution

Now that a general framework for pushing remote code is up and running, it is time to write a few exploits. You can be creative with it. I wasn’t, and came up with two ideas:

  1. Running a set of commands

  2. Querying a directory (recursively reading all the files in its subdirectories and sending them back to the server)

We predefine commands.txt as the standard place for the attacker to store commands that need to be executed on the target machine. Likewise, directory.txt holds the set of directories to be queried.

We build the first exploit.

command_executor.py

A quite straightforward implementation: open commands.txt, read the commands line by line, execute them, craft a response, and send it back to the server.

The second exploit goes as follows.

directory_enlister.py

This exploit opens the file directory.txt and reads the directories to target. It then fires enlist_directory(), which lists the files and the subdirectories separately. For each file, it reads the contents (cat filename) and crafts a response. For each subdirectory, it appends the complete path to the main list directory_list, which in turn makes sure that this subdirectory also gets its turn to be queried for further files and subdirectories within it.

That’s it! Craft commands.txt and directory.txt and go ahead and execute them.

A sample (still quite sensitive information about the target).

Sample run

With the server up and running, move both text files and both exploits to the server and change status.txt to 1 and 0 (meaning the files have been updated but downloader.py need not be, which in turn means: download and run the other files, and consequently run our exploits).

Depending on the target’s connectivity as well as on the time.sleep(30) in the second exploit, it takes a while to transfer the data. But once it does, you realise what you have just obtained.

Note how GET requests are made after the status change.

Several files were downloaded. I moved them into different directories and analysed them. Here’s a snapshot.

‘Contents of’ was added by directory_enlister.py when it read the contents of files and stored them to send back to the server. These are the files whose data ended up in the respective *.txt files.
A sample: the contents of .bash_profile, ifconfig output, and a compressed gz file.
Some files whose data is POSTed to the server.
Files related to the target’s mailbox.
Some mail data. Note that non-encrypted mail data may contain much information about the target.

Here’s some browsing data.

Top sites on Safari
Recently closed tabs
List of Safari files available for analysis

Privilege transfer

The most interesting tasks require root access. While there are complex mechanisms for privilege escalation, we have a slight advantage here: a child of a sudo process has root access.

All you need is to convince the target to run good.py as root. This should not be difficult; users now and then give root access to code that simply does not run without it (like any Scapy code).

Let’s build a simple sniffer that won’t run without root access. Before anything further, change every COMMAND from python3 filename.py to sudo python3 filename.py wherever Popen() comes up (also in good.py, where it Popens the downloader).

sniffer.py

We will upload this through the server, and it gets downloaded. Now let the target run good.py. It prompts for root access. Since the code has meaning and utility to the target, the target will most probably grant it. Now begins your privilege transfer, where root access gets passed on to all children. We shall verify this by seeing whether the sniffer runs.

The Scapy sniffer runs without further prompting.

Scapy requires root access, and here we just ran a Scapy program using the root access given to good.py. It transferred from good.py to downloader.py, which passed it on to all the child processes it spawned. Therefore, the sniffer was able to run just fine. Moreover, terminating these processes now also requires root permissions.

Root access is required to terminate the directory_enlister.py process.

Conclusion

The only thing left is to build functionality for self-hiding. That is a broad topic, and studying rootkits might be a good way to start thinking about it.

Have a good day!

Translated from: https://medium.com/bugbountywriteup/python-http-based-trojan-for-remote-system-forensics-and-privilege-transfer-ae128891b4de
