Aside from the geometry nodes suggestion (which is a good one), have you looked into QGIS?
Yes, I'm coming to Blender from the GIS end of things. I'm aware of most of the GIS tools and software and couldn't find anything relevant that would help me solve this problem. Thanks for the suggestion nonetheless.
@DontNoodles As I understand it, you want to:
- use OSM
- to import GIS into Blender
- with the BlenderGIS add-on, I take it?
- then use color info from that
- to drive selections
- for making buildings in white areas, populating green areas with trees, populating blue areas with water, etc.?
I'm with you till the third bullet point. Thereafter, I'm not planning to use the color info from the OSM/GIS tool. Instead, what I'm suggesting is:
- Create buildings with each floor as a separate Blender object, white in color. Even a simple cuboid will do.
- Create the park/lake as another object. This object alone will serve as a colored light source in an otherwise totally dark scene and illuminate the parts of buildings that its light can reach without being occluded (a rough material sketch follows this list). My assumption is that the lit-up portions of the buildings are exactly the places from which the light source (the park/lake in our case) can be seen.
- The challenge is to identify and list the IDs of the building objects that are now lit up.
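In Blender terms, what I picture for the light source is a pure emission material on the park/lake. A rough, untested sketch (the object name "Park", the color, and the strength are just placeholders):

```python
import bpy

# Hypothetical: the park is a mesh object named "Park" in the scene.
park = bpy.data.objects["Park"]

# Build a pure-emission material so the park itself is the only light source.
mat = bpy.data.materials.new("ParkLight")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()
emission = nodes.new("ShaderNodeEmission")
emission.inputs["Color"].default_value = (0.0, 1.0, 0.0, 1.0)  # green for a park
emission.inputs["Strength"].default_value = 10.0               # arbitrary strength
output = nodes.new("ShaderNodeOutputMaterial")
links.new(emission.outputs["Emission"], output.inputs["Surface"])

# Replace whatever materials the park had with the emissive one.
park.data.materials.clear()
park.data.materials.append(mat)
```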
I hope this rewording makes my query more understandable. English is not my first language.
Not a Blender user myself, but these are general CG concepts. What you want is light baking: you can cook the lighting down to a UV-mapped texture, then bake out the IDs on the same UV coordinates, and you only have to compare the two.
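From a quick look at the Blender docs, I believe the bake step would look roughly like this (untested; "Building_01", the 1024px resolution, and the Cycles setup are all assumptions, and the object needs a material and a UV map already):

```python
import bpy

obj = bpy.data.objects["Building_01"]        # hypothetical building object
bpy.context.scene.render.engine = 'CYCLES'   # baking requires Cycles
bpy.context.view_layer.objects.active = obj
obj.select_set(True)

# The bake writes into the image of the active Image Texture node.
img = bpy.data.images.new("Building_01_light", 1024, 1024)
mat = obj.active_material
mat.use_nodes = True
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = img
mat.node_tree.nodes.active = tex

bpy.ops.object.bake(type='COMBINED')         # full direct + indirect lighting
img.save_render(bpy.path.abspath("//Building_01_light.png"))
```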
The other method that I have employed for a similar question using Houdini is a directional variant of Ambient Occlusion.
- Scatter points on your buildings
- Scatter points on your lakes/parks
- Loop over the points on your buildings and, from each one, attempt an intersection test with a ray pointing to each point on a park/lake
- If the ray gets through, increment a counter; if not, don't
- Find the fraction of rays that are able to see the feature
- Store that fraction on the point
You can do this for each feature type to see how much of each one is visible to each part of the buildings, then just store the floor number on each point as well and, bada boom, you have your mapping. Just have a shader sample the point values and bake it out to a texture.
There are many advantages to an AO model over trying to use light baking to get the same info. Primarily, speed: you don't need nearly as many sample points on either end to calculate it. Secondly, you get much more control over the details and can extract statistical information about the visibility. You can sample values on each sample point that can be aggregated and interrogated while it is doing all of the AO calculations. I could go on about this, but it would likely only become relevant once you saw how it worked.

For instance, you can place indexes on points in a park to represent points of interest, like a fountain or a gazebo, and then, as the visibility is being sampled, add the index to a set whenever the ray is successful. Boom: now you have a mapping not only of how much of the park that spot can see, but also of which points of interest it can see, with essentially zero increase in calculation time. To do the same with light baking you would need a separate render. Also, with lighting you have to worry about falloff, so it becomes difficult to use over a certain distance.
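To make the points-of-interest idea concrete, here is a sketch in plain Python (ray_is_unoccluded() stands in for whatever intersection test your tool provides, and the point attributes are made up):

```python
# Each park sample is a (position, poi_index) pair; poi_index is None for
# plain ground, or e.g. 0 for the fountain and 1 for the gazebo.
for point in building_points:
    hits, visible_pois = 0, set()
    for park_pos, poi_index in park_samples:
        if ray_is_unoccluded(point.position, park_pos):  # hypothetical visibility test
            hits += 1
            if poi_index is not None:
                visible_pois.add(poi_index)   # same rays, extra info for free
    point.visibility = hits / len(park_samples)
    point.visible_pois = visible_pois         # which landmarks this spot can see
```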
I totally get your point as to how it may be faster. Let me read up on ambient occlusion and, since I've never worked with Houdini, see whether I can implement the same using any of the tools I'm conversant with. Thank you for making me aware of this.
Pretty sure Blender has a Python API. And AO research on its own probably won't yield much juice for what I described. The topic of limiting the scope of AO calculations to calculate other things is kinda not a thing. I actually used it as the subject of my Master's thesis. I was using it to calculate the exposure of scenes to the solar ecliptic throughout the year so I could calculate fading from direct sun exposure on textures. That is why I shared a step-by-step instead of a link. The principle for what you want to do is the same though. Measuring the exposure of geometry against some other geometry.
From what I just read, you want to use the Scene.ray_cast() function. Usage should be straightforward.
import bpy
scene = bpy.context.scene
depsgraph = bpy.context.evaluated_depsgraph_get()
EPS = 0.001  # pull the ray ends in slightly so they don't hit the surfaces they start/end on
for point in building_point_cloud:
    total_hit = 0
    for destination in park_point_cloud:
        direction = (destination - point).normalized()
        # no hit between the two points means a clear line of sight
        hit, *_ = scene.ray_cast(depsgraph, point + direction * EPS, direction,
                                 distance=(destination - point).length - 2 * EPS)
        total_hit += not hit
    visibility = total_hit / len(park_point_cloud)
    store_visibility(point, visibility)  # placeholder, as in the original
That should be enough to get you started. I am 99% unfamiliar with the Blender Python API, but this should get you there if you are even remotely experienced with it. There are obviously optimizations to be made, as this is very brute force; I was just trying to illustrate the basic loop.
Thank you, I'll explore along these lines.
I would be very curious to know what you come up with.
Currently, I'm trying to find a lazy (for me) way out. I'm learning to bake the lighting on objects and figuring out how to do it iteratively and automatically for all objects of choice (buildings) in a scene. Thereafter, I hope to do image processing on the unwrapped light-bake maps to detect the desired colors. It should be possible to crop these images to detect light on individual faces and/or find the percentage of exposed area too.
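Roughly, what I have in mind for the color-detection step (untested; the "Building" naming scheme, the baked-image names, and the thresholds are all made up):

```python
import bpy
import numpy as np

for obj in bpy.data.objects:
    if not obj.name.startswith("Building"):        # hypothetical naming scheme
        continue
    img = bpy.data.images[f"bake_{obj.name}"]      # hypothetical baked light map
    w, h = img.size
    px = np.array(img.pixels[:]).reshape(h, w, 4)  # RGBA floats in 0..1
    # A pixel "sees" the park if green clearly dominates, the lake if blue does.
    park = (px[..., 1] > 0.2) & (px[..., 1] > 2 * px[..., 0]) & (px[..., 1] > 2 * px[..., 2])
    lake = (px[..., 2] > 0.2) & (px[..., 2] > 2 * px[..., 0]) & (px[..., 2] > 2 * px[..., 1])
    print(obj.name, "park visibility:", park.mean(), "lake visibility:", lake.mean())
```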
If this does not work out, I'll take the progressively more difficult routes suggested in the thread as I learn and become comfortable with the things you've all kindly given me pointers for.