
Prize Pics

14 devlogs
26h 18m 41s

Kindle for your photos app.

This is an e-ink picture frame. Similar products on the market cost upwards of $200; this one costs $50 and is open source. It has two main parts: hardware and software. The hardware's main job is to display the images in the queue. The software's job is to let the user upload images, view the queue, live-preview what is on the e-ink, and skip the current image if they want.

Full description in the README.

Demo Repository


Arya

Last devlog! In this one, I made my demo video and cleaned up documentation. The main goal of this sprint was to make sure that this project is 1) easy to understand and 2) easy to reproduce. Because this has both hardware and software components, I had to be mindful to clearly explain how to build the hardware. There are also some service files that need to be created, so I wrote documentation that outlines what service files are and how to create them to enable the functionality in prize-pics. All these changes can be seen in the documentation directory on the GitHub.
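For anyone wondering what those service files look like: a systemd unit along these lines would keep the frame running at boot. The paths, script name, and unit name here are my guesses, not the repo's actual files; check the documentation directory for the real ones.

```ini
# /etc/systemd/system/prize-pics.service (hypothetical paths and names)
[Unit]
Description=Prize Pics e-ink slideshow
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/pi/prize-pics/main.py
WorkingDirectory=/home/pi/prize-pics
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After dropping the file in place, `sudo systemctl enable --now prize-pics.service` starts it and keeps it starting on every boot.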

Arya

In the past 3 hours, I worked on the uploader UI. The core functionality implemented is that 1) the current image on display is previewed, 2) this image can be skipped, 3) new images can be uploaded, and 4) images can be deleted. I will now work on making the UI a bit better designed, but eh. I'll probably make a mockup that y'all can play with (without the hardware, ofc).
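Stripped of the UI, those four operations boil down to simple moves on the queue directory. Here's a rough stdlib-only sketch of what they might look like; the directory layout, the rename-to-the-back skip trick, and the function names are all my assumptions, not the actual Prize Pics code.

```python
from pathlib import Path
import shutil

QUEUE = Path("queue")  # assumed queue directory, images shown in sorted order

def current_image():
    """The image at the front of the queue (first by name) -- the live preview."""
    images = sorted(QUEUE.glob("*.png"))
    return images[0] if images else None

def skip_current():
    """Skip: rename the front image so it sorts after the current last one."""
    images = sorted(QUEUE.glob("*.png"))
    if len(images) > 1:
        front, last = images[0], images[-1]
        front.rename(QUEUE / f"{last.stem}_z{front.suffix}")

def upload(src):
    """Upload: copy a new image file into the queue."""
    dest = QUEUE / src.name
    shutil.copy(src, dest)
    return dest

def delete(name):
    """Delete an image from the queue by filename."""
    (QUEUE / name).unlink(missing_ok=True)
```

A web UI then just wires one endpoint to each of these functions.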

Arya

I got a basic working UI, am working on making it look better now. #IHateWebDev

Arya

In this time period, I worked on making the main file for Prize Pics! What this file does is merge all the classes I worked on previously so that everything runs coherently. This gives the project minimal viability – the display will for sure display any image in the queue. The next stage of development is making uploading images easier. I am thinking about making a localhost server.
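The "merge all the classes" idea could look something like the loop below. The class and method names are placeholders I invented for illustration, not the repo's actual API; the components are injected so the wiring can be followed (and tested) without the hardware.

```python
import time

class FrameApp:
    """Ties the slideshow, image processor, and display wrapper together."""

    def __init__(self, slideshow, processor, display, dwell_seconds=60):
        self.slideshow = slideshow   # yields image paths from the queue directory
        self.processor = processor   # turns a path into a 400x300 b/w frame
        self.display = display       # wraps the e-ink driver
        self.dwell = dwell_seconds   # how long each picture stays up

    def run_once(self):
        """Show the next queued image; return False if the queue is empty."""
        path = self.slideshow.next_image()
        if path is None:
            return False
        frame = self.processor.process(path)
        self.display.draw(frame)
        return True

    def run_forever(self):
        while True:
            self.run_once()
            time.sleep(self.dwell)
```

The main script then just constructs the three real components and calls `run_forever()`.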

Arya

Worked on implementing the full read -> process -> cache pipeline. This is the final thing I needed to implement before building a webpage for this. All the Python functions I made now allow for the ability to 1) read an image from a directory, 2) process it, and 3) display it, given the proper assembling of function calls. Super excited to build now. This test_transfer isn't big itself, but I'm excited to see how it fits together with the other functions to provide the overall functionality >:) (if I'm rambling, it's because I want to finish this project tonight haha)
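The read -> process -> cache shape can be sketched in a few lines. This is an assumption-laden stand-in, not the repo's code: the real processing step does the PIL resize/enhance/dither work, which I've replaced with a trivial placeholder so the caching logic stays self-contained.

```python
import hashlib
from pathlib import Path

CACHE = Path("cache")  # assumed cache directory for processed frames

def process_bytes(raw):
    # Placeholder for the real step (resize, enhance, dither with PIL).
    return raw[::-1]

def read_process_cache(src):
    """Read an image, process it once, and reuse the cached result afterwards."""
    raw = src.read_bytes()
    key = hashlib.sha256(raw).hexdigest()[:16]  # content-addressed cache key
    cached = CACHE / f"{key}.bin"
    if not cached.exists():
        CACHE.mkdir(exist_ok=True)
        cached.write_bytes(process_bytes(raw))
    return cached
```

Keying the cache on a content hash means re-uploading the same picture never triggers a second round of processing.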

Arya

In this, I developed the slideshow algorithm! One of the main things I wanted to make sure I could implement is a scrolling motion. What fun would it be if only one image were displayed? The major role of this class is to create a slideshow object that scans the queue directory and iterates through the images. This will allow us, in the future, to call the transform algorithms on these images, and then the display algorithm on the transformed images. I missed object-oriented programming (I did a project in C before this 😭)
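A minimal version of a scan-and-iterate slideshow class might look like this. The class name, the `*.png`-only glob, and the optional shuffle flag are my assumptions, so treat it as a sketch of the idea rather than the repo's actual class.

```python
import random
from pathlib import Path

class Slideshow:
    """Iterates over images in a queue directory, rescanning after each pass."""

    def __init__(self, queue_dir, shuffle=False):
        self.queue_dir = Path(queue_dir)
        self.shuffle = shuffle
        self._order = []

    def _rescan(self):
        # Pick up any images added since the last pass.
        self._order = sorted(self.queue_dir.glob("*.png"))
        if self.shuffle:
            random.shuffle(self._order)

    def next_image(self):
        """Return the next image path, or None if the directory is empty."""
        if not self._order:
            self._rescan()
        return self._order.pop(0) if self._order else None
```

Rescanning only when a pass ends keeps the order stable mid-pass while still noticing newly uploaded images.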

Arya

I created the pipeline script! I found some errors in my code that I fixed, primarily spelling and naming conventions. However, the end result is that it did work, with me being able to put a test.png file in a queue directory and have it be processed and displayed on the e-ink. This is a huge huge huge milestone, as it's going to allow me to abstract out an algorithm that is in charge of just sorting the images, while this takes care of displaying 'em. I want the pictures to be shuffled, so I think I'll implement a shuffle/slideshow next. Super excited this worked, though.


Comments

vitorgabrielventura.0

that looks good

Arya

In this session, I did the bulk of this project by creating the image developer. While my previous class allowed a 300x400 b/w image to be displayed, this algorithm creates that image. It does so with 2 main algorithms: resize_maintain_aspect and apply_dither. First, the user submits an image, which we resize. There might be a ±1 px border because of floating point errors. Then, we apply contrast, brightness, and sharpness enhancements. After that, we convert it to a greyscale image and apply the dithering algorithm. One thing I noticed is that there is no function for Atkinson dithering in the Image class, so I created my own. More documentation on how it works is in the code comments. Overall, this was a really productive session: now, by implementing a pipeline script, the user can submit an image -> this class is applied to output a 400x300 b/w image -> display_controller will display it. Very exciting.
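For anyone curious about the hand-rolled dither: Atkinson dithering is an error-diffusion scheme where each pixel snaps to black or white and only 6/8 of the quantization error spreads to six nearby pixels (the remaining 2/8 is dropped, which gives the characteristic high-contrast look). Here's a pure-Python sketch on a 2-D list of grayscale values, assumptions of mine rather than a copy of the repo's PIL-based version:

```python
def atkinson_dither(pixels):
    """Atkinson-dither a 2-D list of grayscale values (0-255) to 0/255."""
    h, w = len(pixels), len(pixels[0])
    px = [row[:] for row in pixels]          # work on a copy
    out = [[0] * w for _ in range(h)]
    # The six neighbours Atkinson diffuses error to (dx, dy):
    neighbours = [(1, 0), (2, 0), (-1, 1), (0, 1), (1, 1), (0, 2)]
    for y in range(h):
        for x in range(w):
            old = px[y][x]
            new = 255 if old >= 128 else 0   # threshold to pure black/white
            out[y][x] = new
            err = (old - new) // 8           # only 6/8 of the error is spread
            for dx, dy in neighbours:
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    px[ny][nx] += err
    return out
```

On the real frame, the same loop would run over the grayscale PIL pixel data before handing the 1-bit result to the display.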

Arya

I wrote a wrapper class to interact with the e-ink driver! I noticed a lot of functions to display on the e-ink had more stuff attached to them (i.e. time.sleep) and more error handling. To fix this, I wrote a class that makes initializing, drawing, and clearing a lot more straightforward. It also makes sure that correct inputs are provided, and raises errors if a user enters something that isn't right (i.e. an image that isn't 400x300). After making it, I wrote a test script just to make sure it works, and it did! As always, I documented the tester in tests_readme.md to make sure anyone who wants to iterate on this code has documentation available.
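The wrapper idea can be sketched like this. The driver is injected rather than imported from the Waveshare library so the validation can run anywhere; the class name, method names, and `.size` check are my assumptions about the shape of the real code.

```python
class DisplayController:
    """Thin wrapper around an e-ink driver: init, validated draw, clear."""

    WIDTH, HEIGHT = 400, 300

    def __init__(self, driver):
        self.driver = driver
        self.driver.init()

    def draw(self, image):
        """Reject anything that isn't exactly 400x300, then hand it off."""
        size = getattr(image, "size", None)
        if size != (self.WIDTH, self.HEIGHT):
            raise ValueError(
                f"expected a {self.WIDTH}x{self.HEIGHT} image, got {size}"
            )
        self.driver.display(image)

    def clear(self):
        self.driver.clear()
```

Failing fast on a wrong-sized image beats letting the driver choke on it mid-refresh.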

Arya

Created a test script for button functionality. I realized my board had a button I used as a shutter, which I could use to scroll between images. Before I implement that, though, I want to make sure it actually works! The button is between GPIO 18 and ground. To create this script, I enabled a pull-up resistor on GPIO 18, so that whenever the button is pressed, the pin reads low instead of high. This allows for a while(true) script where, whenever the button is pressed, the pin reads low and the terminal displays “button pressed” and a count. Afterwards, I made sure to document this function in the test readme.
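The press-counting logic from that loop can be separated from the hardware. In this sketch (my structure, not the repo's script), the pin read is injected as a callable; on the Pi it would wrap something like `GPIO.input(18)` with the internal pull-up enabled.

```python
def count_presses(read_pin, samples):
    """Count presses on an active-low button by detecting high -> low edges.

    With the pull-up enabled the pin idles high (1); pressing the button
    ties it to ground, so it reads low (0).
    """
    count = 0
    last = 1  # idle level with the pull-up
    for _ in range(samples):
        level = read_pin()
        if last == 1 and level == 0:   # falling edge = new press
            count += 1
        last = level
    return count
```

Counting edges instead of low levels means holding the button down registers as one press, not hundreds.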

Arya

I created a lightweight e-ink test script! I noticed that the Waveshare one, while it works, is a lot. Because of that, I created a script to test the display. The main things it tests for are 1) clearing, 2) displaying PIL objects, and 3) putting it to sleep. After I wrote it and tested it on my end, I wrote documentation on how to use it so that if a user needs to test their hardware, they can just use the repo and not have to search outside.

Arya

Added a requirements.txt for easier installation of all dependencies. This will make sure that the end user is able to install everything quickly with one CLI command. I believe I have everything ready to start making some scripts >:)
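For reference, a requirements.txt for a project like this might look something like the lines below. Pillow and RPi.GPIO follow from the PIL and GPIO work in these devlogs; spidev is purely my guess about the e-ink's SPI bus, so check the repo's actual file.

```
Pillow        # image resizing, enhancement, and dithering
RPi.GPIO      # button input on the Pi's GPIO pins
spidev        # SPI bus used by the e-ink driver (assumption)
```

One `pip install -r requirements.txt` then pulls everything in.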

Arya

Found a way to clone my repo onto my RPi and cloned the Waveshare libraries into my library folder! This lets me sync my code between my RPi and Mac, so I can code on my Mac and then test the code over SSH. I should probably make an ssh_instructions.md haha, but that is a later problem. For now, I have the libraries there so I can start writing the basic scripts to transform images into something I can display.

Arya

In this part, I first started planning what I want this project to accomplish: display images on an e-ink from my phone. I find that having a planning.md file really helps me clear my head and focus on specific problems. After I jotted some ideas there, I started writing the firmware and hardware instructions. These are nowhere near done (as I have not done much of the coding yet!), but I have a basic idea of what the user needs to get started with. For instance, I have a devboard for an e-ink device from an earlier project which I am going to use for my demo, and I outlined the parts needed to build that. However, because that uses an RPi 2ZW, which is slow, I am prototyping on my RPi 3B+ with an e-ink. Because I am prototyping on a new e-ink, I made sure that it was not a lemon through Waveshare's test protocols. I then found some errors in their documentation, so I made a comprehensive test instruction set under firmware_instructions.md. My next step is to clone my repo onto my RPi and start working on the components I'll need (image processor, slideshow, etc.) and testing 'em out. I'm glad I set a solid foundation of what I need to work on in this block, so my following blocks are more productive :)
