
Circle rendering is very conservative #295

Closed
Timmmm opened this issue Sep 28, 2017 · 71 comments · Fixed by #626

Comments

@Timmmm
Contributor

Timmmm commented Sep 28, 2017

SolveSpace version: 2.3
Operating system: MacOS 10.13

When sketching a circle, it seems like the rendering accuracy (i.e. how many line segments it is split into) depends on the physical size of the circle rather than its on-screen size. The result is that if you draw a small circle and zoom in, it looks like this:

[screenshot: a small circle rendered as an octagon]

I.e. not very circular. Nobody is running this on a 386, so as a simple hacky fix I would suggest just increasing the minimum number of segments from 8 to something a bit more reasonable, like 128.

@whitequark
Contributor

Imagine you're extruding this circle. Now, the entity (the flat circle) has 128 edges, but the solid mesh (the spatial cylinder) has 8 edges.

No way to rectify this inconsistency.

cc @Evil-Spirit @jwesthues this seems to be a really common complaint, any ideas to at least ameliorate it?

@Timmmm
Contributor Author

Timmmm commented Sep 28, 2017

Hmm so I had a look in the source code and discovered the chord tolerance setting, which obviously helps. Reducing it from 0.5% to 0.01% basically fixes this for circles down to 1mm diameter, which is probably about the limit that I imagine most people would care about.

To be honest I would have expected the code to work by representing all the geometry as true curves, and then turning them into polygons for display. In which case it should be easy (well "easy") to set the chord accuracy dynamically based on how big the object is on screen. Or in other words, the chord accuracy should be measured in screen-space pixels, not world-space mm.

You know more than me though so maybe it isn't that simple. :-)

I just checked how Blender handles this. It seems like it uses a fixed number of divisions of each bezier irrespective of their size. Also not ideal.
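For reference, the relationship between chord tolerance and segment count is simple geometry: splitting a circle of radius r into n equal segments leaves a maximum deviation (the sagitta) of r * (1 - cos(pi/n)) between arc and chord, so n must be at least pi / acos(1 - tol/r). A standalone sketch of that calculation (my own illustration; the function name is made up, and this is not SolveSpace code):

```cpp
#include <cmath>

// Minimum number of equal segments so that the sagitta (max deviation
// between arc and chord) of a circle stays within chordTol.
// Illustrative sketch only; not SolveSpace's actual subdivision logic.
int segmentsForChordTol(double radius, double chordTol) {
    const double kPi = std::acos(-1.0);
    if (chordTol >= radius) return 3;  // any coarse polygon already satisfies it
    double halfAngle = std::acos(1.0 - chordTol / radius);
    int n = (int)std::ceil(kPi / halfAngle);
    return n < 3 ? 3 : n;
}
```

Under the 0.5% setting, a 10 mm radius circle alone in the scene gets roughly a 0.1 mm absolute tolerance and about 23 segments; at 0.01% it jumps past 150, which matches the observation that tightening the tolerance "basically fixes this".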

@Timmmm
Contributor Author

Timmmm commented Sep 28, 2017

I guess this is the relevant code?

void SBezier::MakePwlInto(List<Vector> *l, double chordTol) const {
    if(EXACT(chordTol == 0)) {
        // Use the default chord tolerance.
        chordTol = SS.ChordTolMm();
    }
    l->Add(&(ctrl[0]));
    if(deg == 1) {
        l->Add(&(ctrl[1]));
    } else {
        // Never do fewer than one intermediate point; people seem to get
        // unhappy when their circles turn into squares, but maybe less
        // unhappy with octagons.
        MakePwlInitialWorker(l, 0.0, 0.5, chordTol);
        MakePwlInitialWorker(l, 0.5, 1.0, chordTol);
    }
}

Maybe as a not-ideal fix it could just be changed so that the minimum number of segments is more than 2.

@whitequark
Contributor

To be honest I would have expected the code to work by representing all the geometry as true curves, and then turning them into polygons for display.

That's exactly what we do.

In which case it should be easy (well "easy") to set the chord accuracy dynamically based on how big the object is on screen. Or in other words, the chord accuracy should be measured in screen-space pixels, not world-space mm.

Sure, but regenerating the entire mesh for a large project can take seconds, if not minutes. Do you want this on every zoom?

@jwesthues
Member

Ideas:

  1. Adjust default parameters until you have an equal number of complaints about speed vs. too few pwl segments.
    • But I think you get more complaints about speed already?
  2. Just make everything faster, duh.

You're obviously doing lots of little things that help (2). I don't see any one easy big thing, or else we'd have done it already...

@Timmmm This is the same tolerance used for the NURBS Boolean operations, so the computation isn't trivial. We could make the tolerances different for entities vs. the mesh, but then e.g. entities wouldn't line up with the edges of the mesh they were extruded to form.

@Evil-Spirit
Collaborator

Evil-Spirit commented Sep 28, 2017

I don't think we should do anything about this.
On one side is the precision of visualization; on the other, performance. We can increase the minimum to 16 segments, but what if we need to create a big array of cylindrical shapes, e.g. a chip radiator? Then it can never be as simple as it is with 8 segments. The other thing we can do is use a relative (fractional) tolerance from each bezier primitive's bounds, not from the bounds of the whole model.

@whitequark
Contributor

The other thing we can do is use a relative (fractional) tolerance from each bezier primitive's bounds, not from the bounds of the whole model.

This is worth exploring, I think.

@Timmmm
Contributor Author

Timmmm commented Sep 29, 2017

How does other CAD software handle this? I don't recall Solidworks or NX ever getting close to an octagon, even for 1mm circles.

(Sorry if it sounds like I'm complaining btw - this is by far the best open source CAD software in existence, and better in a few ways than the big commercial ones.)

@whitequark
Contributor

whitequark commented Sep 29, 2017

I think they indeed calculate tolerance relative to primitive bound. The more I think about it, the more I become convinced that this is the only reasonable option. It might even replace the absolute/export tolerance (or at least both tolerances will be relative values).

@jwesthues, any opinion?

@jwesthues
Member

Commercial CAD systems generally do a fixed chord tolerance, either absolute (like for export) or as a function of model size and/or zoom level and stuff. They presumably have lots of special case heuristics on top of that. If you've never seen the PWLs there, then I'd guess it's because they're just faster, making better use of the GPU, selectively re-meshing, etc. So they can set a finer tolerance without getting too slow. All of that could be done in SolveSpace, but it's a lot of work.

What's the rationale behind tolerance as a fraction of primitive bound? Like, if I represent the exact same geometric curve as one Bezier piece vs. two, why do I want finer PWLs for the two case? A single chord tolerance for the entire model is at least internally consistent. Note that the Booleans also use a chord tolerance, and rely on that geometric consistency.

@whitequark
Contributor

Ack.

This seems to be a somewhat frequent UI papercut. What do we do here? Teach it better? Add some sort of heuristic that acts when you place the first sketch?

@jwesthues
Member

Add a "chord angle" tolerance, based on the maximum angle between the curve and the chord approximating it? So if you want big things meshed finer, you make the chord distance tolerance smaller, but if you want little things meshed finer, you make the angle tolerance finer. That's how SolidWorks export does it, for example.

@whitequark
Contributor

That seems very elegant.

@jwesthues
Member

And easy to implement--just modify MakePwlWorker to compute some tangents and check the angle in the same place it checks distance. To make the Booleans consider angle tolerance would be a bigger project; but that's relevant only in cases where the chord distance tolerance is so big that the Booleans are failing entirely, and you can always fix that by dropping the chord distance tolerance, same as now.
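A sketch of what that angle check could look like in isolation (hypothetical names; the real MakePwlWorker would compute the tangents from the Bezier itself):

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Angle in radians between two direction vectors.
static double angleBetween(Vec2 a, Vec2 b) {
    double dot = a.x * b.x + a.y * b.y;
    double c = dot / (std::hypot(a.x, a.y) * std::hypot(b.x, b.y));
    if (c > 1.0) c = 1.0;   // clamp against floating-point drift
    if (c < -1.0) c = -1.0;
    return std::acos(c);
}

// Hypothetical extra stopping test for the pwl recursion: accept the
// chord only if the curve tangents at both endpoints stay within
// angleTol of the chord direction.
static bool chordWithinAngleTol(Vec2 chordDir, Vec2 tanA, Vec2 tanB,
                                double angleTol) {
    return angleBetween(chordDir, tanA) < angleTol &&
           angleBetween(chordDir, tanB) < angleTol;
}
```

An angle tolerance alone keeps small circles round, since the turn per segment is bounded regardless of physical size, while the existing distance tolerance still limits segment counts on large curves; combining the two is the SolidWorks-style scheme described above.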

@whitequark whitequark added this to the 3.0 milestone Jul 12, 2018
@jolshefsky

For what it's worth, this also occurs during image export, which is highly undesirable. My specific use case is making a line-art isometric diagram of a small part; all the circles render as octagons even when they are >10% of the size of the whole page.

@baryluk

baryluk commented Jan 16, 2020

Hi,

this is always the first thing I change in a freshly compiled, first-run SolveSpace. Then I always need to play with the max segments and tolerances. Relative tolerances would be way better, I think.

I usually change my segments setting to 32 or 100, and only back it down to a lower number if performance suffers, which is almost never.

8 is really super low.

@Timmmm
Contributor Author

Timmmm commented Mar 28, 2020

This is by far my biggest issue with Solvespace at the moment, and Solvespace is by far the best open source CAD software, so I had a look at the code to understand this more. This is on master as of a week or so ago.

How it currently works

  1. Initial default settings are loaded in solvespace.cpp:
    // Chord tolerance
    chordTol = settings->ThawFloat("ChordTolerancePct", 0.5);
    // Max pwl segments to generate
    maxSegments = settings->ThawInt("MaxSegments", 10);
    // Chord tolerance
    exportChordTol = settings->ThawFloat("ExportChordTolerance", 0.1);
    // Max pwl segments to generate
    exportMaxSegments = settings->ThawInt("ExportMaxSegments", 64);

(By the way is there a GUI for these settings? I can't see a way to edit them...)

  2. When generating PWL (piecewise-linear) curves from beziers and circles (which are actually just 4 beziers), it calls SolveSpaceUI::GenerateAll() in generate.cpp. It calls it twice - first with genForBBox = true to get the bounding box for the entire scene, and then with genForBBox = false to generate the actual PWLs.

  3. The second time it is called, it runs this code:

    // If we're generating entities for display, first we need to find
    // the bounding box to turn relative chord tolerance to absolute.
    if(!SS.exportMode && !genForBBox) {
        GenerateAll(type, andFindFree, /*genForBBox=*/true);
        BBox box = SK.CalculateEntityBBox(/*includeInvisibles=*/true);
        Vector size = box.maxp.Minus(box.minp);
        double maxSize = std::max({ size.x, size.y, size.z });
        chordTolCalculated = maxSize * chordTol / 100.0;
    }

So chordTol is the chord tolerance as a percentage of the bounding box size, and chordTolCalculated is the absolute chord tolerance in mm. This has the strange effect of changing the tolerance of some entities depending on what else is in the scene. For example, small circle inside a large circle:

[image]

Same small circle but with the large circle deleted:

[image]

  4. The calculated absolute chordTol is passed all the way down to SBezier::MakePwlWorker() in ratpoly.cpp where the actual check is:
    double d = pm.DistanceToLine(pa, pb.Minus(pa));

    double step = 1.0/SS.GetMaxSegments();
    if((tb - ta) < step || d < chordTol) {

Here it also checks the max segments.

Also, when exporting, it just returns exportChordTol (which is in mm) divided by exportScale:

double SolveSpaceUI::ChordTolMm() {
    if(exportMode) return ExportChordTolMm();
    return chordTolCalculated;
}
double SolveSpaceUI::ExportChordTolMm() {
    return exportChordTol / exportScale;
}
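The stopping rule in step 4 is easy to reproduce in isolation. Here is a toy version for a plain 2D quadratic Bezier (my own standalone sketch; SolveSpace's real code works on rational Beziers in 3D):

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Point on a quadratic Bezier at parameter t.
static Pt bez(Pt p0, Pt p1, Pt p2, double t) {
    double u = 1.0 - t;
    return { u*u*p0.x + 2*u*t*p1.x + t*t*p2.x,
             u*u*p0.y + 2*u*t*p1.y + t*t*p2.y };
}

// Distance from p to the infinite line through a and b.
static double distToLine(Pt p, Pt a, Pt b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    return std::fabs((p.x - a.x) * dy - (p.y - a.y) * dx) / std::hypot(dx, dy);
}

// Same stopping rule as MakePwlWorker: recurse until either the
// parameter span falls below 1/maxSegments or the curve midpoint is
// within chordTol of the chord. Appends the endpoint of each accepted chord.
static void pwl(Pt p0, Pt p1, Pt p2, double ta, double tb,
                double chordTol, int maxSegments, std::vector<Pt>* out) {
    double tm = (ta + tb) / 2.0;
    Pt pa = bez(p0, p1, p2, ta), pb = bez(p0, p1, p2, tb),
       pm = bez(p0, p1, p2, tm);
    double d = distToLine(pm, pa, pb);
    double step = 1.0 / maxSegments;
    if ((tb - ta) < step || d < chordTol) {
        out->push_back(pb);
    } else {
        pwl(p0, p1, p2, ta, tm, chordTol, maxSegments, out);
        pwl(p0, p1, p2, tm, tb, chordTol, maxSegments, out);
    }
}
```

Note how maxSegments acts as a hard per-Bezier cap: because the parameter span halves on each recursion, even a near-zero chord tolerance cannot push the subdivision much past maxSegments chords per piece, which is exactly why tiny circles bottom out as coarse polygons.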

Problems with this approach

  1. The absolute chord tolerance depends on the size of your scene, which makes it behave in an unintuitive way.
  2. Sometimes you really do have a large scene with small features that you care about. For example if I am CNCing some holes in a tabletop, it doesn't matter that the table is huge - I still want my small features to look nice when zooming in on them.

Possible solutions

We have the existing constraints:

  1. Maximum number of linear segments per bezier segment.
  2. Distance tolerance, proportional to the scene size.

There are some other constraints that are possible.

  3. Angle tolerance (as mentioned above).
  4. Distance tolerance that is proportional to the size of the current bezier segment.
  5. Absolute distance tolerance (like for export).

My thoughts on the constraints are:

  1. Drop this. It means circles are never very smooth, even if they are huge and are well outside the distance tolerance.
  2. Drop this too. It's just confusing that tolerance depends on the size of your scene, even if it sort of vaguely makes sense.
  3. I think angle tolerance makes quite a lot of intuitive sense - it's the sharp corners that really look bad in the preview. But it might be hard to use just on its own, because handling sharp beziers could be tricky (you'd end up with a load of tiny linear segments at the sharp corner). You could possibly do something fancier, like a tolerance on the cross product of consecutive line segments?
  4. This seems like it might work to me - if you've drawn small features you probably want the tolerance to be proportional to those features. This is kind of how actual manufacturing tolerance works too.
  5. I think this would be useful as an additional constraint.

All these constraints can be combined in a boolean expression. Maybe one day the user could pick it. Anyway I am going to experiment with some different ones - I'll start with 4.
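A sketch of what combining these constraints into one boolean expression could look like (the struct, names, and the particular combination are my own illustration, covering items 3-5 above; not SolveSpace code):

```cpp
// Hypothetical bundle of the tolerances discussed above:
// an angle tolerance, a distance tolerance relative to the current
// Bezier's own size, and an absolute distance tolerance.
struct PwlTols {
    double maxAngleRad;   // (3) angle tolerance
    double relDistFrac;   // (4) fraction of this Bezier's bounding size
    double absDistMm;     // (5) absolute distance tolerance
};

// A chord is acceptable if its deviation passes either distance test
// AND its turn angle passes the angle test.
static bool chordFineEnough(double chordDistMm, double bezierSizeMm,
                            double turnAngleRad, const PwlTols& t) {
    bool distOk = chordDistMm < t.absDistMm ||
                  chordDistMm < t.relDistFrac * bezierSizeMm;
    return distOk && turnAngleRad < t.maxAngleRad;
}
```

The "either distance test" disjunction means small features get the relative limit while large features are bounded by the absolute one; other combinations (e.g. requiring both) are equally easy to express here.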

@Timmmm
Contributor Author

Timmmm commented Mar 28, 2020

Oh, also if you don't use a constraint that is proportional to the scene size, you don't need to call GenerateAll(genForBBox=true).

@whitequark
Contributor

I think your suggestion (4) makes a lot of sense. I'd also ask @jwesthues about it.

@jwesthues
Member

You are correct that the max line segment count per Bezier segment (1) doesn't make any geometric sense. I intended it just as a sanity check, to stop the regeneration from becoming unusably slow if the user chooses a very fine distance tolerance, especially since there's no way to interrupt that. If you haven't hit such problems, then I suspect you don't make big solid models and would be happy greatly increasing (1) from the default.

Per above, distance as a fraction of Bezier length/bounding box/whatever (4) also doesn't make geometric sense--the Beziers will get chopped up by Boolean operations, and it doesn't make sense to represent the same physical curve finer just because it's now in two pieces. The Booleans assume a consistent distance tolerance, which is achieved only by (2) and (5). Perhaps (4) could replace line segment count as the sanity check, though it's perhaps more obvious to a user that "10000 segments" might get slow than that "0.0001 relative chord tolerance" will.

I'm not sure what you mean by a tolerance on the cross product? The magnitude of that has dimensions of length squared (times the sine of the angle), which does not seem useful. The combination of angle tolerance (for little things) and chord tolerance (for big things) is pretty common in professional CAD.

@Evil-Spirit
Collaborator

Evil-Spirit commented May 10, 2020

I've spent so much time implementing this...
    [image]

@whitequark
Contributor

@Evil-Spirit Your work has served us very well so far, and adding an angle chord tolerance would be building on top of it, not replacing it. Or am I missing something?

@ruevs
Member

ruevs commented May 10, 2020

Personally I do not mind tiny (compared to the model size) octagonal circles/holes at all. It is obvious to (this) user that it is just an optimization for the visual rendering. Real tiny octagons are "rare" (nuts are usually hexagonal :-) so there is little risk of confusion. As for visually checking alignment - constraints are for that.

So I would not trade "smooth tiny circles" for any performance loss. If it is possible to do it without performance loss AND - more importantly - without very complicated (potentially buggy) code - then fine.

I love SolveSpace because it is small and the code is beautiful and (relatively) simple. This issue seems to go against that.

@phkahler phkahler added the mesh label May 11, 2020
@whitequark
Contributor

@phkahler This isn't really a mesh issue, is it? The Bezier curves from entities are turned into piecewise linear curves independently from the solid model.

@baryluk

baryluk commented May 16, 2020

@Timmmm and @Evil-Spirit Thanks for putting a lot of effort into your investigations.

I am not super familiar with SolveSpace internals, especially rendering - only some pieces of the solver and the algebraic constraints - but I am pretty sure there is room for improvement, so the renderer can be smarter.

@ruevs The octagonal ones bug me a little bit. If it was a 16-sided polygon I would be OK-ish with it.

No commercial CAD program in the world really has pixel-accurate rendering (which is actually very unfortunate), but I think there might be some interesting smarter options to look at.

How about, when instantiating the geometry for visualization, looking at the angles between consecutive line segments, and if one is too high (weighted by the zoom factor, maybe?), backtracking and refining the approximation?

Another observation: people often edit the model and then look at it and rotate it without changing any constraints. How about having a background thread that asynchronously recalculates at a higher level of detail and updates the display when ready? If a new change comes in, cancel it.

Just some ideas. I work with SolveSpace on a desktop with a good CPU and a good GPU, so I rarely see performance issues, but a more interactive experience is worthwhile for everybody.

@whitequark
Contributor

whitequark commented May 16, 2020

How about, when instantiating the geometry for visualization, looking at the angles between consecutive line segments.

That was suggested in #295 (comment). It seems like a good route.

How about having a background thread that asynchronously recalculates at a higher level of detail and updates the display when ready.

At the moment the internals of SolveSpace aren't well suited for parallel, asynchronous processing, so this is probably not viable. It would also not work for the single-threaded Emscripten port, and given that it significantly affects UI, I'd much rather evaluate other approaches first.

@whitequark
Contributor

That said, just like @Evil-Spirit says, starting to use geometry shaders will not directly impact the OpenGL 1 renderer, since it's separate from the OpenGL 3 renderer. It might after all be necessary to have a fallback in place for machines that only have DirectX 9 but not DirectX 11, or OpenGL 3 but not OpenGL 4. (How common are these? Does macOS provide geometry shaders via OpenGL, or only via Metal?)

It's just that I consider the OpenGL 1 renderer a pure liability and won't hesitate to remove it when it starts to cause any problems. That isn't going to happen until the 3.0 release, but is likely to happen the next time the rendering code needs to evolve.

@ruevs
Member

ruevs commented May 18, 2020

Hi all,

Supporting XP has become eccentric - I understand. But I enjoy my machine booting to 165 MB of RAM used. And as long as I have XP machines that are used and useful (right now I have two), I will keep a "fork" of SolveSpace that works on them.

I understand that gl3shader.c/h are not compiled when the OpenGL 1 renderer is configured, and therefore any changes to shaders cannot directly impact it. However, the more that is done with shaders, the more the OpenGL 1 renderer breaks - as whitequark already pointed out. That said, I simply stated that this makes me "sad"; I did not say "don't do it". It's interesting why this annoys you so much.

Now back to the OpenGL 3 renderer and its shaders. I know almost nothing about OpenGL programming, but Google knows, and from what I have read, if you want to do

Parameters of curve can be passed to shader and curve points can be computed on gpu.

This requires a tessellation shader, not a geometry shader. Tessellation shaders were introduced in OpenGL 4, and thus using them will require graphics drivers that support OpenGL 4.

It seems Intel graphics started supporting OpenGL 4 in Gen7, which is Ivy Bridge. So, for example, people with a Sandy Bridge CPU (e.g. a Core i7-2600K: 4 cores/8 threads, 3.40 GHz/3.80 GHz Turbo, 8 MB cache) will not be able to run SolveSpace. Really?!?

And if a fallback is considered necessary anyway is it worth having the shader for performance, the fallback and the added complexity of "detecting and switching"?

@whitequark
Contributor

It's interesting why this annoys you so much.

There's nothing complex about that—the story of maintaining an open-source project is the story of dealing with boundless entitlement, passive aggression, and guilt trips from people benefiting from the labor of volunteers. I maintain a lot more of them than you do (which is a wise choice on your part), am correspondingly exposed to more of it, and therefore have far lower tolerance for it.

Tessellation shaders were introduced in OpenGL 4 and thus using them will require graphics drivers that support OpenGL 4.

Yup, that's why I mentioned OpenGL 4 support.

So for example people with a Sandy Bridge e.g Core i7-2600K 4 cores/8 threads, 3.40 GHz/3.80 GHz Turbo, 8 MB Cache will not be able to run SolveSpace. Really?!?

Of course not. The reason I said that we "can" use geometry shaders is that hardware and software support for them has become widely available, in the sense that all of our target platforms should, in principle, be able to use them on recent hardware. The context you are probably missing here is that a few years ago, @Evil-Spirit suggested using OpenGL 4 features, and back then they were available only on select high-end hardware and on few platforms, so we couldn't use them at all, in contrast to the current situation. So at the time, I decided that we wouldn't spend any time trying to add OpenGL 4 dependent features, which has changed now.

And if a fallback is considered necessary anyway is it worth having the shader for performance, the fallback and the added complexity of "detecting and switching"?

I don't understand how you can argue, in two statements that follow each other, simultaneously in favor of having a fallback OpenGL 1 renderer for machines without OpenGL 3, and against having a fallback OpenGL 3 renderer for machines without OpenGL 4.

So another reason why your attitude annoys me is that your position seems to be that the complexity that benefits you is necessary or at least tolerable, whereas the complexity that doesn't benefit you is a drawback. You're, of course, free to consider solely your personal convenience, but as a maintainer I don't have this option.

@ruevs
Member

ruevs commented May 18, 2020

As for ANGLE - as we have discussed - I know it can work under XP - Firefox 52.9.0esr is living proof. Maybe I will spend the time to figure out how one day.

@ruevs
Member

ruevs commented May 18, 2020

@whitequark "ACK" for all the technical points.
I apologize for annoying you. Last non-technical post from me:

I don't understand how you can argue, in two statements that follow each other, simultaneously in favor of having a fallback OpenGL 1 renderer for machines without OpenGL 3, and against having a fallback OpenGL 3 renderer for machines without OpenGL 4.

Just because the GL1 renderer already exists, happens to still work OK, and (as far as I am aware) causes no work or grief for anyone. So it is a zero-overhead leftover that no one but me cares about, and it will bit-rot until removed.

So another reason why your attitude annoys me is that your position seems to be that the complexity that benefits you is necessary or at least tolerable, whereas the complexity that doesn't benefit you is a drawback. You're, of course, free to consider solely your personal convenience, but as a maintainer I don't have this option.

See above about the complexity. As for "my convenience" - I am not expecting or demanding anything from anyone - thus the "sad" statement above was useless and I should not have written it. I work on SolveSpace (less than I would like to), when I have some time, because I like it: the architecture (mostly), the code, the UI - that's all.

Back to semi-technical arguments: Ivy Bridge was introduced in 2012, so if using OpenGL 4 features and dropping OpenGL 3 (and 1) will make SolveSpace faster and simpler, I think it is better to do that instead of adding "fallbacks".

It's not my intention to reduce the S/N ratio, thus this is my last non-technical post. But I reserve the "right" to annoy anyone and everyone with technical arguments if I think it's important :-D

@whitequark
Contributor

whitequark commented May 18, 2020

@ruevs Thank you for being considerate, and I should note that I appreciate both your technical arguments and your other contributions. I do note that what counts as "too complex" is inherently social, and should, perhaps, be handled with more care than usually happens in technical discussions—but by no means do I want to shut down that discussion either.

if using OpenGL 4 features and dropping OpenGL 3 (and 1) will make SolveSpace faster and simpler, I think it is better to do that instead of adding "fallbacks".

I think we are in agreement here: I also think that if dropping support for obsolete hardware simplifies the codebase, and if we can actually do it without impacting too many users (we'll never impact zero and that's just one of the unfortunate facts of life), we should do it. So the question is, what's the cost, and what's the impact?

We can never be sure of impact because we don't collect telemetry. We can only rely on hardware surveys by other projects, like Firefox or Steam, and those aren't necessarily representative.

Cost is a much easier consideration. I would not expect a GPU-based Bezier renderer to result in a major increase in complexity (wouldn't it be based on a pwl decomposition anyway?), but I could be wrong. In any case, adding a fallback wouldn't require forking the renderer, and so it's much easier to test and maintain.


While looking into this I discovered that even WebGL 2 doesn't actually support geometry shaders, it only has transform feedback. So the question of whether we'll need a fallback if we use GPU-based Bezier rendering is moot: the answer is yes.

@jwesthues
Member

I'm afraid we may be discussing this to exhaustion. But just to compound that: I think I see people doing Beziers just with vertex shaders, with a fixed number of line segments per Bezier? The fixed segment count is wasteful, but maybe the whole thing then gets fast enough that we don't care, or we could use simple heuristics to choose from a small number of fixed segment counts?

Or just more line segments on the CPU of course (for loose curves only, not for the solid model). I suspect all of that is near-infinitely fast compared to the solid model.

@whitequark
Contributor

Yeah, the discussion veered into geometry shaders without much consideration of whether we can already do it. I think an approach using a fixed segment count would be entirely acceptable while not requiring anything beyond existing OpenGL 3. I remember seeing quite a few options; besides the ones I've mentioned earlier, I think you can also do this with signed distance functions.

@phkahler
Member

I'll just chime in again, and then I'm moving on to better things. IMHO:

  1. We don't need geometry shaders. Drawing curves is not the bottleneck here.

  2. If the curves are not rendered with the same number of segments as the surfaces they are attached to, we're going to have all sorts of Z-buffer issues. And if those could somehow be fixed, people would complain about the other funny effects this will have.

  3. When increasing the minimum from 2 to 4, why do the surfaces not follow?

  4. The performance implications are not trivial. I have a design with a grid of holes. I made them square because SS choked on that many round holes. The performance is still bad with squares. This is not about tessellation, but about algorithmic complexity. There are areas in the code that have straightforward algorithms (good) that are not as efficient as they could be.

  5. NURBS are easy. Trimmed NURBS are hard. NURBS booleans are even worse.

  6. OP needs to lower chord tolerance for now.

@whitequark
Contributor

2. If the curves are not rendered with the same number of segments as the surfaces they are attached to, we're going to have all sorts of Z-buffer issues. And if those could somehow be fixed, people would complain about the other funny effects this will have.

You're completely right, and I missed the Z-buffer implications. I suspect this rules out any solution that doesn't ultimately amount to a more sophisticated pwl decomposition.

@whitequark
Contributor

4. The performance implications are not trivial.

Not to disagree with you, but to refine this statement: once we tie the Beziers to the mesh, the performance implications become correctness implications as well. I've had a design with a combination of small holes and large holes, and it was not trivial to find a display chord tolerance that wouldn't cause booleans to fail. (Then again for export, but what we currently do for export is good.)

This was after we tied display chord tolerance to a percentage of entity bounding boxes in 34a5d87. I'd have to investigate but I suspect that is not sufficient in some cases, though it certainly makes things a lot better.

@Evil-Spirit
Collaborator

I think I see people doing Beziers just with vertex shaders

I wrote about such an approach - we can use it without geometry shaders.

@phkahler
Member

@jwesthues Can you explain why changing circles to a minimum of 16 segments (4 per curve) results in the issues shown above by both @ruevs and myself? Is there a mismatch between this and some other part of the code? And why does it not make squares out of octagons? This seems important to understand for people working on the innards.

@jwesthues
Copy link
Member

@phkahler No idea without actually debugging, I'm afraid. What happens if you just modify the worker to keep recursing until tb - ta <= 0.25?

@phkahler
Member

phkahler commented Jun 2, 2020

I found the problem with 4-segment arcs. At the top of the function:

void SCurve::RemoveShortSegments(SSurface *srfA, SSurface *srfB) {
    // Five, not two; curves are pwl'd to at least four edges (three points)
    // even if not necessary, to avoid square holes.
    if(pts.n <= 5) return;

This is my fix. It used to say "three, not two" because curves had a minimum of 2 segments "to avoid square holes".

Together with this change over in ratpoly.cpp:

        // unhappy with sixteen sides (formerly octagons).
        MakePwlInitialWorker(l, 0.0, 0.25, chordTol);
        MakePwlInitialWorker(l, 0.25, 0.5, chordTol);
        MakePwlInitialWorker(l, 0.5, 0.75, chordTol);
        MakePwlInitialWorker(l, 0.75, 1.0, chordTol);

We can have all circles with a minimum of 16 segments (4 per curve). The function in curve.cpp was deleting short segments on the pwl, but only for one surface? I thought pwls were shared by the two surfaces, so deleting segments from one affects both. Anyway, by forcing more edges that go below the chord tolerance, that other function needs to be changed or it will delete them.

I have no idea if there are any negative consequences for doing this. The comment above RemoveShortSegments suggests that operation is necessary because short segments can break booleans. That suggests to me the possibility that some boolean errors may be caused by the early exit leaving small edges that break booleans. That would also mean that increasing the minimum edges to 4 there might make things worse (more early exits).

It seems like there should be a more elegant way to remove the short edges while not breaking booleans and not reducing circle edges. @jwesthues @ruevs @Evil-Spirit Any ideas?

@jwesthues
Copy link
Member

Thanks, had forgotten about that. RemoveShortSegments() is supposed to fix e.g. a case where the original edge is from (0, 0) to (0, 1) to (1, 1), and then due to an intersection we get an extra point really close to one of the originals, like (0, 0) to (0, 1) to (0.0001, 1) to (1, 1). In that case, we want to effectively coalesce the two very nearby points, because very nearby points may break the Booleans.

Edges smaller than the chord tolerance are no problem; it's when edges get small compared to our numerical epsilons that stuff can break, and that's much smaller. So I wouldn't expect any new problems from more segments in our circles. But I think it would be better if RemoveShortSegments() removed the segment count test, and just applied e.g. 1/10 the normal chord tolerance? As things stand, you could already still have the actual bad case with just three points, and we wouldn't fix it. Increasing the segment count further in that test would increase the risk of that problem.

And I'm not sure what you mean by "for only one surface"? In the pictures, I thought the smoother circles were the bare entities (independent of the solid model), and the solid model was consistently square. If the two surfaces' trim curves are being treated differently (and thus creating a non-watertight mesh?) then that would definitely be a bug, and one that potentially affects the Booleans.

@phkahler
Copy link
Member

phkahler commented Jun 2, 2020

And I'm not sure what you mean by "for only one surface"? In the pictures, I thought the smoother circles were the bare entities

Yes, that was a mistake in my memory. When I first saw it I thought the cylinder was keeping all the edges, but it's just the curve and none of the surfaces (which still seems a little weird). I had also considered using a fraction of the chord tolerance there (I didn't want to say too much while soliciting ideas). The original pwl points are presumably correct with respect to the chord tolerance, and the split point will be at the end. So the only points that might be removable should be the ones adjacent to the ends? Maybe only if they're geometrically very close to the end point?

@jwesthues
Copy link
Member

A split can occur anywhere along the curve, since a curve in shell A can get split anywhere that it intersects a surface from shell B (and surfaces in shell B can be anywhere).

@phkahler
Copy link
Member

phkahler commented Jun 2, 2020

I mean the problematic points should be near the end of the portion of the PWL after it's split. There may be 20 points, and if it splits right next to the 2nd one, we get a pwl with 3 points: 2 original ones and one at the splitting surface. Or the remaining part could have 10 points, but the only ones in question should still be near the split plane.

@jwesthues
Copy link
Member

First, note that the beginning of an SCurve isn't necessarily the beginning of the trim curve. There's only one SCurve per physical curve, even if that curve trims multiple surfaces (as it always does in a closed shell). The multiple surfaces may use different subsets of that curve, as specified by an STrimBy. This is the mechanism that causes the multiple trim curves of multiple surfaces to be piecewise-linear interpolated through the same points, without which our mesh wouldn't be watertight.

You can also construct geometry where a curve really does need to get split in the middle, e.g. a cylinder minus a triangle inscribed in that circle and extruded (though many such cases will also break for other reasons). I think curves get split unnecessarily too, though it's been a long time. In any case, the short segments definitely aren't just at the beginning or end of the SCurve, nor even of the STrimBy.

@phkahler
Copy link
Member

phkahler commented Jun 4, 2020

I have a solution in PR #626. It keeps non-vertex points if the curve is exact and the t-parameter distance between points is more than 0.1, regardless of chord tolerance. Such points can only have been added deliberately, as in the circle/octagon code. This allows removing the early exit based on the number of points.

@phkahler phkahler linked a pull request Jun 12, 2020 that will close this issue
@phkahler
Copy link
Member

Also including an update so that 2nd- and 3rd-degree curves have a minimum of 4 segments. Circles will have at least 16 segments, so we shall close this issue with that PR.

There will also be another PR to reduce default curve tolerance to a more useful value.

@whitequark
Copy link
Contributor

Okay. I'd like to take a close look at those before merging, please let me know when you're done.

@phkahler
Copy link
Member

@whitequark woops, I merged this one before I saw your post. I'll hold off the other one, which is having conflicts due to this anyway.

whitequark pushed a commit that referenced this issue Jun 14, 2020
Before this commit, with the highest chord tolerance settings, circles
would render as octagons, which confused a lot of people. See #295.
@ruevs ruevs self-assigned this Aug 22, 2022