average flattening of an .stl (or pointcloud) to roughly conform to a plane
show user profile  TheShrike
Hi everyone, I'm asking this here because, of all the forums I go to, this one seems to have the most knowledgeable and versatile user base on way more than just MAX.

Soooo.. Here is my question.

I work with pointclouds of scanned surfaces, which I convert to meshes in the end; it doesn't really matter which. The basic idea is to rip textures off real-life things like cement or woodgrain or whatever, and convert that scanned data into a heightmap. The problem is that it's rare we get a scan of something in the real world that is as flat as it needs to be for a heightmap. So I have several methods for flattening these pointclouds enough to get a uniformly flat heightmap while still retaining the micro details of the texture, but none of my methods are quick or that reliable.

So, to reiterate: I need a way to flatten within a certain tolerance, so that the general form of the scan conforms to a plane, with minimal loss of texture detail.

Any advice on this would be extremely awesome.
read 452 times
5/29/2015 1:56:48 PM (last edit: 5/29/2015 1:56:48 PM)
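As an aside, the "conform the general form to a plane" step is essentially a least-squares plane fit. A minimal sketch in Python, assuming NumPy and the scan loaded as an (N, 3) array; the function and variable names are illustrative only, not an existing tool:

```python
import numpy as np

def flatten_to_plane(points):
    """Rotate a point cloud so its best-fit plane becomes the XY plane."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # SVD of the centered cloud: the last right-singular vector is the
    # normal of the least-squares plane through the points.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    # Express each point in the plane's own basis (two in-plane axes,
    # then the normal), so Z becomes the height above/below the plane.
    return centered @ vt.T
```

For a scan that is only gently bowed, the Z column of the result is already close to a usable height value; a single plane is not enough once the surface has larger-scale curvature.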
show user profile  TheShrike
I also want to explain, in a separate message, my current method for doing this. First, I convert the pointcloud to STL and import it into ZBrush.

I take the curved, super-high-density mesh and decimate it, then use that low-density mesh to project the details from the high to the low. I do this level by level of subdivision, so in the end I have a 7- or 8-level mesh, with the highest level accurately representing the detail of the original scan. I auto-UV the lowest SubD, then I flatten it entirely. When I go back up to the highest level, the detail is still there, so all is pretty well at this point.

This is where I run into some issues. I try to make a displacement map out of ZBrush but am limited by the 8192x8192 texture output. I need textures much larger than that, which in the end need to be 1270 DPI. I can convert the DPI in Photoshop, but obviously I need to start with pretty huge textures to not lose any quality.

So... what I do as a workaround from here is export my high poly and low poly and try to use xNormal, since it can export larger textures. I just have not had as much luck getting nice heightmaps out of xNormal. Even with 4x anti-aliasing and big bucket sizes, I get a lot of blocky, lame inconsistencies. It looks way different than my ZBrush displacement maps. I have tried using cages on the low poly to get a bigger projection range, but I am still not happy with the results, just lots of garbagey projection going on. Using the tonemapper doesn't help either.

ANYWAYS... I am not wild about this whole method, because sometimes scan data is so huge that even ZBrush can't handle it. What I really want is an algorithm that takes non-flat point clouds, projects them to a plane within a certain minimum tolerance so as not to lose the fine detail, maybe has some sliders to adjust levels, and just spits out an image without having to visualize the giant file. Does this exist, or does someone have a better idea? Again, any help would be awesome.
read 447 times
5/29/2015 2:33:46 PM (last edit: 5/29/2015 2:33:46 PM)
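A hedged sketch of what such an algorithm could look like, building on a plane fit like the one above: fit a low-order polynomial "base" surface to absorb the remaining large-scale curvature (its order playing the role of the tolerance slider), then bin the residual heights straight into an image grid, with no meshing or UVs. NumPy only; every name and the quadratic-surface choice are assumptions, not an existing tool:

```python
import numpy as np

def heightmap_from_cloud(flattened, resolution=4096, order=2):
    """Turn a roughly planar (N, 3) cloud into a heightmap image array."""
    x, y, z = flattened[:, 0], flattened[:, 1], flattened[:, 2]

    # Low-order polynomial base surface z ~ f(x, y). A higher order follows
    # the scan's overall form more closely, leaving only finer detail in the
    # residual -- this is the "tolerance" control.
    terms = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack(terms, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residual = z - A @ coeffs

    # Rasterize: average residual height per pixel.
    ix = ((x - x.min()) / (x.max() - x.min()) * (resolution - 1)).astype(int)
    iy = ((y - y.min()) / (y.max() - y.min()) * (resolution - 1)).astype(int)
    heights = np.zeros((resolution, resolution))
    counts = np.zeros((resolution, resolution))
    np.add.at(heights, (iy, ix), residual)
    np.add.at(counts, (iy, ix), 1)
    return np.divide(heights, counts, out=np.zeros_like(heights),
                     where=counts > 0)
```

Empty pixels would still need hole-filling, and the result would have to be normalized and saved at 16 or 32 bits to work as a displacement map, but nothing here ever builds a mesh, so the cloud size is limited only by memory.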
show user profile  Garp
An idea would be to create a low-res model of the surface onto which you would bake the height map from the hi-res.
A fairly simple way would be to have a cube define a point by averaging the positions of the points within it. Then, to get some overlap, you'd move the cube by one third of its size in one direction and compute a new position. You would scan the entire point cloud this way, in effect creating a second point cloud at a lower res. You'd create your low-res surface by meshing the low-res cloud with the same algorithm used to mesh the hi-res one. You'd then create UVs for the low res by flattening/relaxing it. Then bake.
The trade-off between hi-res and low-res detail is controlled by the cube's size.
In a way it's akin to image filtering. I wouldn't be surprised if you could use some existing OpenGL feature to perform it, since OpenGL supports 3D textures: create a lower-res version of the 'texture' using larger texels. In any case, the whole process would be very easy to parallelize.

edit: I didn't see your second post.
You might want to browse graphics libraries like CGAL. They probably have something.

read 444 times
5/29/2015 2:43:24 PM (last edit: 5/29/2015 2:46:49 PM)
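A minimal sketch of the overlapping-cube averaging Garp describes, again assuming NumPy and an (N, 3) array; names are illustrative. Stepping the cube by a third of its size is implemented here as 27 interleaved voxel grids (three shifted origins per axis), each reduced with a grouped mean:

```python
import numpy as np

def downsample_overlapping_cubes(points, cube_size):
    """Average points inside cubes whose origins step by cube_size / 3."""
    step = cube_size / 3.0
    base = points.min(axis=0)
    out = []
    for shift in np.ndindex(3, 3, 3):              # 27 shifted grids
        origin = base - np.asarray(shift) * step
        # Index of the cube (in this shifted grid) each point falls into.
        idx = np.floor((points - origin) / cube_size).astype(np.int64)
        keys, inverse = np.unique(idx, axis=0, return_inverse=True)
        inverse = inverse.ravel()
        # Mean position of the points sharing a cube.
        sums = np.zeros((len(keys), 3))
        np.add.at(sums, inverse, points)
        counts = np.bincount(inverse, minlength=len(keys)).astype(float)
        out.append(sums / counts[:, None])
    return np.vstack(out)   # lower-res cloud, ready to mesh and bake
```

The cube size is the single quality knob, exactly as described: bigger cubes smooth away more of the surface form, smaller cubes keep more of it in the low-res mesh.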
show user profile  K-tonne
If you could get a very low-poly version of the scan (like under 10 polys), then you could theoretically project the low poly onto a flat plane, and the high-frequency detail in the higher levels should be maintained to some degree.
Use the projection brush as a last resort.


read 413 times
6/1/2015 4:41:42 PM (last edit: 6/1/2015 4:41:42 PM)
show user profile  Nik Clark
MeshLab is great at creating hi-res textures from pointclouds/photos/scans, and can do mesh decimation without affecting the textures. Don't know if it will work for what you are doing though.


read 409 times
6/1/2015 4:55:39 PM (last edit: 6/1/2015 4:55:39 PM)
show user profile  ccampbell
You need Geomagic.

It can remove errant noise with fine control from a point cloud or mesh, either selectively or globally, while keeping the point cloud's color fidelity.

$Entrepreneur = if((Designer + Engineer)*Programmer){Problem Solver};

read 394 times
6/2/2015 12:12:05 AM (last edit: 6/2/2015 12:18:57 AM)
show user profile  TheShrike
All great advice... Thanks everyone, you are the best!
read 373 times
6/2/2015 1:06:12 PM (last edit: 6/2/2015 1:06:12 PM)