Making a 3D Scanning Booth
Hi and welcome to another instalment of “Ava gets nothing done”!
First a disclaimer:
I’m not making a 3D Scanning Booth.
Well, I might be, but that’s not what I was looking to do when I started. What was I looking to do? Well actually I just wanted to purchase a 1/4 or 1/2 scale dress-form for mocking up my clothing designs on!
The thing is, they’re just as expensive as full size dress-forms, and more often than not, shaped for some imaginary ideal feminine body shape that I have zero chance of ever attaining.
No. What I need is a customised scale dress form that is based on my body!
Then it occurred to me…I should just build one (pretty much my answer to everything!).
Anyway, I think you get the idea. I run from one thought to another and rarely stand still. The fabrication of my own dress form made me think of 3D printing the parts, which of course led to 3D scanning myself, which led to building my own 3D scanner!
Will I ever actually get anything done? Who knows…but a quick search on the topic reveals it’s actually relatively straightforward.
The approximate steps are:
- Have a spare room to house the scanner.
- Build a sphere shaped frame out of some rigid material.
- Place a camera at each frame junction.
- Have each camera take a shot of the subject at the same time.
- Send the various results to a computer for assembly.
- Some amount of time later, a 3D model complete with real world texture and material information is ready!
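The “take a shot at the same time” step is the fiddly bit. Here’s a minimal sketch of the idea in Python, simulating each camera as a thread released by a shared barrier. This is just a local simulation; a real rig would need the trigger distributed over the network to each Pi, which this doesn’t attempt.

```python
import threading
import time

NUM_CAMERAS = 4  # hypothetical small test rig, not the full booth

barrier = threading.Barrier(NUM_CAMERAS)
timestamps = [0.0] * NUM_CAMERAS

def capture(cam_id):
    # Block until every camera thread is ready, then "shoot" together.
    barrier.wait()
    timestamps[cam_id] = time.monotonic()
    # A real Pi would call its camera library here instead of timestamping.

threads = [threading.Thread(target=capture, args=(i,)) for i in range(NUM_CAMERAS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

spread = max(timestamps) - min(timestamps)
print(f"capture spread: {spread * 1000:.3f} ms")
```

The spread between the first and last “shot” gives a feel for how tight the synchronisation is; over a network it would obviously be worse.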
Now, most of the above steps are ultimately just logistics. Lots of suitable materials are available. Cheap cameras are available. Computers are good at controlling other electronic devices.
There are also some additional steps depending on how far you want to go on this.
- Put in uniform lighting.
- Put in contrast lighting.
- Put the lighting on a microsecond controllable timer.
- Take two pictures per frame. One with uniform lighting, one with contrast lighting.
- Assemble the images locally (i.e. near the cameras themselves using a wired connection) and build the 3D model in realtime with baked-in spherical harmonics for storing occlusion information.
- Stream the result to a client computer for realtime, interactive, user controlled re-lighting of the data for use in a game engine. Or something else!
Working with Raspberry Pis for the camera image capture and processing / transmission is probably the main limiting factor in the short term, but that should improve with time, so I think modularity would be desirable, so that they can be swapped out as things improve.
Also, the timing / ability to stream might be somewhat compromised…that’s something I don’t know right now; I’m not sure how reliable or high quality their output is. If realtime streaming is out of the question in the short term, then the costing changes quite considerably. The only time you actually need an array of n * m stationary cameras is if you’re going to do realtime streaming. Once that’s out of the window, it makes more sense to just store the results locally and transmit at your leisure, in which case you’ve got plenty of time to move the camera around. An obvious solution would be a four-quadrant rotating frame, where each camera is moved to cover its 90 degrees. In theory just two cameras (a single spinning ring) could cover 180 degrees each, but that would require the ring to spin very quickly for realtime work, assuming 30 FPS.
Of course a single camera would have to cover 360 degrees 30 times a second (that’s 1,800 RPM), assuming you could live with the drift from each sample being captured slightly later than the last.
If you could, then you could slow things down a lot by adding extra cameras, but it also complicates things. Using two cameras would mean each is responsible for capturing 180 degrees. Once they reach their destination, you would then swap their identities, so that conceptually the same area is always being captured by the same camera. This means 360 degrees gets captured twice as fast, so half the rotation speed is required: 900 RPM. Using four cameras would give 450 RPM. Using eight cameras would give 225 RPM. Eight is probably the sweet spot if you aren’t rotating at all, as each camera is more than capable of capturing 45 degrees of the view, but rotating would reduce hardware costs, so perhaps four cameras would be better. Obviously we’re only talking about the horizontal plane here, so another four would be needed 45 degrees below the horizontal plane, another four 45 degrees above it, and a final camera looking straight down from the top. So:
13 cameras -> minimises construction…just two perpendicular rings required around the up axis, and four divisions around the horizontal plane, with the tips omitted from the count. Actually, it might be better to avoid using the vertices (ring intersections) in the horizontal plane and instead use the centres of the polygon faces. Basically, three horizontal rings, each covering a third of the 180 vertical degrees they share. If the sensors are a typical aspect ratio, this means using them in portrait orientation.
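To sanity-check the count, here’s one way the layout could be generated in Python: three rings of four cameras (at -45, 0 and +45 degrees elevation) plus a single camera looking down from the top. The exact angles are my assumption of what the rings above would work out to.

```python
import math

def camera_directions():
    """Unit direction vectors for a hypothetical 13-camera layout:
    three rings of four (elevations -45, 0, +45 degrees) plus one top camera."""
    dirs = []
    for elev in (-45, 0, 45):
        for azim in (0, 90, 180, 270):
            e, a = math.radians(elev), math.radians(azim)
            dirs.append((math.cos(e) * math.cos(a),
                         math.cos(e) * math.sin(a),
                         math.sin(e)))
    dirs.append((0.0, 0.0, 1.0))  # the camera looking straight down from the top
    return dirs

print(len(camera_directions()))  # 13
```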
So, 13 cameras, two rotating rings, with 3 sampling intervals along the vertical, rotating at 450 revolutions per minute. That sounds high, but it’s actually only 7.5 revolutions per second. Importantly, 7.5 Hz is below the roughly 20 Hz lower limit of human hearing, so the rotation frequency itself should be inaudible to humans and microphones.
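All the rotation figures above fall out of one formula: each camera only needs to sweep 360/n degrees per frame, so the ring spins at fps / n revolutions per second. A quick sanity check (30 FPS and an evenly shared ring assumed):

```python
def required_rpm(cameras_in_ring, fps=30):
    """RPM needed so the ring as a whole covers 360 degrees `fps` times a second;
    each of the n cameras only sweeps its own 360/n degree slice per frame."""
    revs_per_second = fps / cameras_in_ring
    return revs_per_second * 60

for n in (1, 2, 4, 8):
    print(n, required_rpm(n))
# 1 -> 1800.0, 2 -> 900.0, 4 -> 450.0, 8 -> 225.0
```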
The rotating array would be ceiling mounted, with the cabling running through the frame and through the centre of the mounting axis, so that it doesn’t get affected by rotation. Of course wireless could be an option, but some kind of Doppler-like effect might make it unreliable, and even if it didn’t, the same wiring scheme would still be needed to supply power to the cameras anyway.
Costs rotating vs stationary:
Rotating:
- Material and engineering for two rotating rings.
- A servo motor to rotate the rings at 7.5 RPS.
- 13 cameras.
Stationary:
- Material and engineering for four stationary rings.
- 26 cameras.
Given the price vs complexity trade-off, I don’t think rotation is a good idea at that price difference. The cameras would have to be prohibitively expensive to make it worthwhile (they’re not currently).
In which case, would one of the other rotating examples be a better trade-off?
- A single rotating ring at 15 RPS.
- 7 cameras.
- Still low enough frequency to not generate nasty sounds.
- If the servo motor is cheaper than the cameras it saves, then it becomes worthwhile. By my count that’s 26 − 7 = 19 cameras, and each Raspberry Pi + camera is around £40, so the servo motor would need to cost less than 19 * £40 = £760.
- Potentially quite portable.
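The break-even point is simple enough to put into a little helper (the 26-camera stationary count, 7-camera rotating count and £40 per Pi-plus-camera are the figures from above; the function itself is just my own bookkeeping):

```python
def servo_budget(stationary_cameras, rotating_cameras, unit_cost_gbp=40):
    """Maximum price for the servo/ring assembly before rotating
    stops being cheaper than just buying more cameras."""
    cameras_saved = stationary_cameras - rotating_cameras
    return cameras_saved * unit_cost_gbp

print(servo_budget(26, 7))   # single rotating ring: 760
print(servo_budget(26, 13))  # two rotating rings: 520
```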
I think for experimentation purposes, rotating might be quite fun, but in practice it would be more sensible to just build the stationary version, saving money through the simpler frame construction and the lack of a servo. The cameras would have to be twice as expensive to make rotation worthwhile.
I don’t know if I’d ever get around to it, but having one of those built in one of my unused rooms in my house would be pretty awesome and not just for helping me build a dress form.
Anyway, I guess it’s something to think about.
Resources
The first few hits that appeared on Google were all pretty great and informative:
https://www.raspberrypi.org/blog/pi-3d-scanner-a-diy-body-scanner/
https://all3dp.com/buildbrighton-makerspace-builds-1000-full-body-3d-scanner-with-raspberry-pi/
https://www.instructables.com/id/3D-Body-Scanner-Using-Raspberry-Pi-Cameras/