
Use dear imgui for user interface #263

Closed
traverseda opened this issue Jul 13, 2017 · 57 comments
Labels
documentation internals UI Web The emscripten based web version of SolveSpace wontfix

Comments

@traverseda

It's decent, looks easy to integrate, and is platform agnostic. It seems perfect for platforms like javascript/emscripten and android, and more or less equivalent to the current non-native widgets.

Of course, native integration with things like toolbars is nice.

@whitequark
Contributor

So. Right now we already have a homegrown immediate mode UI library. It has obvious deficiencies (like not supporting RTL or ligatures in fonts), but neither does imgui! Moreover, we currently support input methods because we use the platform's edit widgets on every platform, but imgui does not. So it's a lot of added code for what's essentially a regression in accessibility.

I'm not inherently opposed to using this library and anyone reading this is free to discuss it, but for now I see no reason to migrate to it either.

@traverseda
Author

It seems to me like using native widgets on each platform is going to be very difficult to maintain.

I've found multiple references to IME on the imgui releases page, but that feature seems to be undocumented.

"InputText(): Replace OS IME (Input Method Editor) cursor on top-left when we are not text editing." is one.

It looks like it has some undocumented mechanism for IME, but I'm not sure how that would work.

But even if it doesn't support it natively, that doesn't seem like a problem. If you were to implement your main window on top of something generic like SDL, you should have an easier time implementing IME in a cross-platform way, passing IME up the stack.

According to the relevant docs, you can read the 'io.WantCaptureXXX' flags in the ImGuiIO structure. When imgui wants to capture input, run something like 'SDL_StartTextInput' to enable the IME. This is also basically how you'd trigger the on-screen keyboard on mobile devices.
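For reference, here is a minimal sketch of that hand-off, assuming an SDL2 backend; ImGuiIO::WantTextInput and SDL_StartTextInput()/SDL_StopTextInput() are real APIs, while the surrounding UpdateImeState() helper and its call site are hypothetical:

```cpp
#include <SDL.h>
#include "imgui.h"

// Hypothetical per-frame hand-off between imgui and SDL2's IME machinery.
// ImGuiIO::WantTextInput is set while a text widget is active;
// SDL_StartTextInput()/SDL_StopTextInput() toggle the platform IME
// (and the on-screen keyboard on mobile).
void UpdateImeState() {
    ImGuiIO &io = ImGui::GetIO();
    static bool textInputActive = false;
    if (io.WantTextInput && !textInputActive) {
        SDL_StartTextInput();   // enable IME / on-screen keyboard
        textInputActive = true;
    } else if (!io.WantTextInput && textInputActive) {
        SDL_StopTextInput();    // release it once editing ends
        textInputActive = false;
    }
}
```

This is roughly what imgui's own SDL example backend does, just isolated into one function for clarity.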

That would mean mostly not maintaining separate widget toolkit code for each platform, which I think would save a non-trivial amount of development time once the native-UI components get more complicated. Unless you were planning on standardizing on one cross-platform native-ish UI toolkit?

Ligatures and RTL are both more pressing problems. It does look like it's something they're interested in working on, however. For now, it looks like it should support any ligature with its own Unicode code point, which should be most of the relevant ones. I've opened a bug report for RTL text here.

@whitequark
Contributor

If you were to implement your main window on top of something generic like SDL, you should have an easier time implementing IME in a cross-platform way, passing IME up the stack.

We already have a working implementation of a platform abstraction layer that functions exactly as I want. Why should I add SDL to the mix? It's even more code, more bugs, more bloat, etc.

As another example, SDL may give me native IMEs but it does not give me native menus and keyboard accelerators, which e.g. will not fit into the new GNOME interface (where the menu attaches to the top of the screen and not window header). So SDL would make platform integration worse.

Android needs a ground-up UI redesign for touchscreens, I don't need nor want portability there.

That would mean mostly not maintaining separate widget toolkit code for each platform

I don't. Most UI code is shared and uses OpenGL, much like imgui. We actually could use imgui, or nanovg, or whichever, for drawing everything that isn't text, and I'm not opposed to that at all--it will make some things cleaner--but I don't see a particularly compelling reason to do this either.

What specifically do you need dear imgui for? Are you looking to implement a particular feature?

@whitequark
Contributor

whitequark commented Jul 13, 2017

OK, if you really want to use imgui, feel free to start the conversion with "toolbar.cpp". If the code actually comes out cleaner architecturally, or at least I see the potential for it to be cleaner architecturally, I will gladly migrate to this library. It doesn't matter if the particulars of your implementation are inelegant; I can look through that.
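As a rough illustration of scale, a converted toolbar.cpp might reduce to something like the sketch below; ImGui::Begin/Button/SameLine are real imgui calls, while the tool names and the ToolSelected() dispatch function are made up for the example:

```cpp
#include "imgui.h"

// Hypothetical dispatch into SolveSpace; not a real function.
void ToolSelected(const char *name);

// A sketch of what toolbar.cpp could shrink to: one button per tool in
// a borderless window. Real icons would use ImGui::ImageButton instead.
void DrawToolbar() {
    const char *tools[] = { "Line", "Rectangle", "Circle", "Arc" };
    ImGui::Begin("Toolbar", nullptr,
                 ImGuiWindowFlags_NoTitleBar | ImGuiWindowFlags_NoResize);
    for (const char *tool : tools) {
        if (ImGui::Button(tool))
            ToolSelected(tool);  // placeholder for SolveSpace's own dispatch
        ImGui::SameLine();
    }
    ImGui::End();
}
```

The point of the exercise would be whether the surrounding state handling comes out cleaner than the current drawing code, not the widget calls themselves.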

@whitequark whitequark reopened this Jul 13, 2017
@ghost

ghost commented Jul 14, 2017

Does imgui work with a GPU limited to OpenGL 1.x?

Or does imgui require OpenGL 2.0?

@whitequark
Contributor

dear imgui outputs vertex buffers, which are not available in OpenGL 1.
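To illustrate, after ImGui::Render() the library hands back an ImDrawData holding indexed vertex arrays, which a backend walks roughly like this (a sketch in the style of older imgui backends; SetClipRect/BindTexture/DrawIndexed are hypothetical renderer hooks, not imgui API):

```cpp
#include "imgui.h"

// Hypothetical renderer hooks; stand-ins for whatever the host provides.
void SetClipRect(const ImVec4 &clip);
void BindTexture(ImTextureID tex);
void DrawIndexed(const ImDrawVert *vtx, const ImDrawIdx *idx, unsigned elemCount);

// Sketch of consuming imgui's output: each draw list carries a vertex
// buffer, an index buffer, and a series of commands with clip rects and
// texture ids. Backends of that era advanced the index pointer by
// ElemCount per command, as done here.
void RenderImguiOutput(ImDrawData *drawData) {
    for (int n = 0; n < drawData->CmdListsCount; n++) {
        const ImDrawList *cmdList = drawData->CmdLists[n];
        const ImDrawVert *vtx = cmdList->VtxBuffer.Data;
        const ImDrawIdx *idx = cmdList->IdxBuffer.Data;
        for (const ImDrawCmd &cmd : cmdList->CmdBuffer) {
            SetClipRect(cmd.ClipRect);
            BindTexture(cmd.TextureId);
            DrawIndexed(vtx, idx, cmd.ElemCount);
            idx += cmd.ElemCount;
        }
    }
}
```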

@traverseda
Author

traverseda commented Jul 14, 2017

What specifically do you need dear imgui for? Are you looking to implement a particular feature?

Not as such; it's more an architectural thing. The current immediate-mode GUI sucks, and it looks like it's going to suck for third-party contributors as well. I'm skeptical of the idea of using a bunch of different native widget toolkits to implement the more complicated UI elements. That seems like it would quickly end up being very hard to maintain. I'd like the limited developer time to be spent on other stuff, like tree-based layers and a new file format.

Maybe I'm mis-estimating, but I imagine a lot of developer time is going to end up being spent on OS support. But it seems to me like the best way to make the project more accessible is to compile it for the web with emscripten. And that's not going to work well with native widgets.

So I suppose the big features I'm looking to support with ImGui are faster prototyping and web browser support.

@whitequark
Contributor

And that's not going to work well with native widgets.

Huh? Of course it would, I'll just use an <input>.

@traverseda
Author

traverseda commented Jul 14, 2017

As I understand it, your HTML inputs would essentially be running in a different process, so what you're actually talking about is implementing a whole new UI in html/JS, and including some kind of IPC mechanism (Presumably whatever emscripten uses by default)? Am I understanding that correctly?

@whitequark
Contributor

Not at all. Take a look at a WIP branch that abstracts the details of the exact platform. There's a tiny API that needs to be implemented to add support for another window system.

@traverseda
Author

So your plan is to first write a generic cross-platform UI toolkit that outputs UI elements native to each platform? Even presuming that you're only targeting a small subset of functionality, that's still a pretty complicated thing. Easy enough when you're only targeting menus and menu bars, but more complicated when you start talking about things like tree editors.

I mean, if that's what you'd like to spend your time doing, then sure. But it seems like a full time sort of thing, and like something that's going to take a lot of maintenance overhead. And I don't really see the advantage. That abstraction is going to be pretty leaky, or high-enough level that you end up basically implementing a whole new UI for each platform anyway, you just break it up into a few (large scale) reusable pieces. So implementing the layer-picker as a cross-platform widget instead of the widgets that the layer picker is composed of. That's not really much better from a maintenance standpoint.

There have been a number of attempts to do a cross-platform rendering of native widgets, and sure, targeting a limited subset is going to help. But it still seems an awful lot like a quagmire to me, especially once you get past the most basic functionality. Maybe there is a sweet spot there, maybe you'll hit it successfully, but it seems like a big chunk of code that doesn't need to get written. Most of the groups I see using that technique have a lot more developer resources, and are willing to just pay the higher developer/maintenance costs.

I see the advantage, but I think it's going to be pretty expensive to actually use that advantage. I mean, look at other open-source tools in this sphere.

My ideal would be progressive enhancement: default to imgui, but for things like file pickers, progressively enhance to the native file picker when you can. Same with the toolbar on macOS.


But we're well past the point of hard data, and into the field of opinions and intuitions. A bunch of subjective stuff. I'll hopefully find the time over the next few days to prototype toolbar.cpp in imgui, and you can make a judgment then.

Today I'm probably going to be doing some work on modoboa.

@whitequark
Contributor

whitequark commented Jul 14, 2017

So your plan is to first write a generic cross-platform UI toolkit that outputs UI elements native to each platform? Even presuming that you're only targeting a small subset of functionality, that's still a pretty complicated thing. Easy enough when you're only targeting menus and menu bars, but more complicated when you start talking about things like tree editors.

Please understand that:

  1. I already wrote this, it needs just a bit more polish to be published, and
  2. I do not intend to ever go beyond text editors, menu bars, tooltips and file pickers.

The UI elements I've described are in a way privileged. Menu bars interact with accelerators and are overlay windows, tooltips are overlay windows, text editors need IMEs, and file pickers have way too many edge cases to implement myself (plus people hate non-native ones). No other elements need this degree of integration with the OS; they can all be done in literally any OpenGL immediate-mode UI library we want. I have no real preference there other than disliking the current one, which has many flaws.

This is a known and proven technique used by successful software, e.g. Sublime Text implements portability in exactly the same way. (They use a slightly different technique for editors.)

So implementing the layer-picker as a cross-platform widget instead of the widgets that the layer picker is composed of.

Implementing any text window functionality through platform widgets would be insane and I never had any intention to do it. I do not understand what gives the impression otherwise.

But we're well past the point of hard data, and into the field of opinions and intuitions.

I don't think we actually have any fundamental disagreement. I've been writing cross-platform software for many years, and am well aware of the things you describe. Nor did I decide on the platform/gui.h abstraction arbitrarily, it's exactly as large as I think it should be.

@whitequark
Contributor

To add to the above, something I would be quite glad to get rid of is the platform scroll bar. It's a nightmare to interface with and it blocks #39, and #39 blocks Emscripten integration because Emscripten can only create exactly one WebGL context.

@traverseda
Author

traverseda commented Jul 14, 2017

Well then, I find myself pretty confused.

It sounds like we're both describing exactly the same thing: use an immediate-mode GUI for the complicated widgets, and progressively enhance that "special class" of widgets for particular platforms.

The big difference is: roll your own that handles IME using native widgets, or use an established, documented project and do whole-window IME? To me, it's pretty strange that one wouldn't jump at the chance to push some maintenance/documentation costs off to a third party, so I assumed you were trying to do something different.

@whitequark
Contributor

To me, it's pretty strange that one wouldn't jump at the chance to push some maintenance/documentation costs off to a third party, so I assumed you were trying to do something different.

It's not really a reduction in cost. For example, if I add SDL, I have to figure out how to integrate it into the CMake build system (we use CMake exclusively so that targeting Windows and cross-compilation stay sane), then I have to figure out how to integrate it with ANGLE to support Windows machines with no OpenGL 2+ drivers, then I have to figure out how to add native menus and tooltips to it, and then it's going to slow me down every time I do a full recompile.

Where's the reduction in cost? The platform abstraction we currently have is a cost that's already paid, and it doesn't really need any maintenance.

It sounds like we're both describing exactly the same thing: use an immediate-mode GUI for the complicated widgets, and progressively enhance that "special class" of widgets for particular platforms.

Not quite. I've decided which widgets get special treatment from the outset. I intend to draw everything else via an immediate mode GUI, period.

@traverseda
Author

traverseda commented Jul 14, 2017

Wall of text coming, and I'm not describing it well, so for that I apologize.


There's no individual thing that you couldn't do yourself, but working as part of a group? You're expecting other programmers to learn a unique project-specific immediate-mode gui, instead of something portable between projects. Right now that's pretty reasonable, but hopefully this project is going to get bigger.

Every increase in expressiveness brings an increased burden on all who care to understand the message.

~ We need less powerful languages.

See also "There should be one-- and preferably only one --obvious way to do it."

ImGui is one of those less powerful languages, and it's decently documented. Once you get a whole lot more features and a lot more widgets, that's going to be important.

Imagine you've got expression evaluation working, so you can make fully parametric parts. If you wanted to make, I don't know, a screw thread, you could define a length, diameter, and pitch.

Some guy wants to add support for parametric sub-parts. In the layer view, your layer could be a "symlink" to an assembly, with some arguments. Parametric parts as functions, easy enough.

The guy who writes that code: how do you think he'd approach the project if he's dealing with an undocumented UI system? How do you think he'd approach it if he can ask questions in the dear-imgui IRC channel?

Let's say that you decide you need RTL text. You implement it in your custom UI; nothing much happens. You implement it for imgui, and suddenly a bunch of projects get much better support for internationalization, and accessibility improves across the board. It takes a bit longer.

Let's say somebody needs much bigger text in UI elements, or high-contrast text. Do they drop an imgui.ini file in their home dir (and have it work across applications), or do they submit a request for a bunch of new UI customization options, for each app they work with? (I consider dealing with cases like that to fall under the heading of maintenance.)

It's true that rolling your own can be easier, especially if it's a small project with a few contributors.

The question is less "why should I switch to dear-imgui" and more "why should contributors switch to my custom UI". A contributor that learns your custom UI has done just that; a contributor that learns imgui for your project has a valuable skill they can take other places.

Of course designing software is hard, and there are always trade-offs to be made, but I think it's important to apply the DRY principle. Duplicated code sucks for the reasons duplicated code has always sucked, even if it's duplicated across projects. That's pretty much the only reason open source can compete with proprietary software: the network effects of code deduplication. There's something happening with the game theory of FLOSS that lets it compete with proprietary stuff, and it could be happening more with this project.

Sometimes we need to duplicate code anyway, but I don't think this is one of those times.

@whitequark
Contributor

The question is less "why should I switch to dear-imgui" and more "why should contributors switch to my custom UI".

We don't really have a UI library. It's just a bunch of code that draws rectangles. I have absolutely no allegiance to it, and I've repeatedly said that it makes my life harder. I would obviously never advocate for anyone to switch to it or learn it because, well, there isn't a coherent thing to switch to in the first place.

I honestly don't know what or whom you're arguing with, I've agreed to switch the project to dear imgui, provided it's actually cleaner, in the very first comment.

We actually could use imgui, or nanovg, or whichever, for drawing everything that isn't text, and I'm not opposed to that at all

@traverseda
Author

Well, all that was answering a very specific question:

Where's the reduction in cost?

And for the sake of pedantry, the comment where you agreed to migrate was the fifth ;p

I'm not so much arguing as trying to make sure that there is a benefit. Like I say, I suck at C++; mostly I do dynamically typed languages. So this is probably going to take me 10 hours minimum, for what the docs describe as taking "around an hour". I'm probably not going to be able to take it much beyond toolbar.cpp, so it's important that you're on board.

Less argument, more making sure I understand the plan (I didn't until I clarified) and making sure that you actually are on board, since it's almost definitely going to require you to do some work.

@whitequark
Contributor

Well sure, like I already mentioned, the homegrown UI breaks down almost completely for #39 (I actually tried to implement #39 already and failed), so some sort of new solution is needed. dear imgui might just be it, let me glance at its API I guess.

@eric-schleicher

eric-schleicher commented Jul 14, 2017

This thread is immensely interesting to me. To help educate me: is the idea here simply that an external immediate-mode UI system brings more widgets with better support?

Or something much more esoteric, like drafting solvespace through an emscripten pipeline?

The idea of getting the use of the SolveSpace UI and constraint solver in JavaScript from WebVR is tremendously interesting. Is that where this leads?

@whitequark
Contributor

is the idea here simply that an external immediate-mode UI system brings more widgets with better support?

Correct.

Or something much more esoteric, like drafting solvespace through an emscripten pipeline?

Indirectly, it does, see #263 (comment).

The idea of getting the use of the SolveSpace UI and constraint solver in JavaScript from WebVR is tremendously interesting. Is that where this leads?

I want this for outreach, enabling people to use solvespace without installing it. Especially with Windows 10 and stuff, downloading and running executables just isn't cutting it anymore in today's world.

I personally have no interest in VR, skills required to make VR work, or for that matter money for a VR rig.

@traverseda
Author

traverseda commented Jul 14, 2017

There would need to be some significant work done (in c++) to get this running under webVR.

The biggest challenge is that the webVR spec doesn't appear to have any interaction. So you might be able to view a scene, but the scale would most likely be off, and you wouldn't be able to edit it (through the webVR spec), even if you had a fancy 3D-space controller. This is the list of webVR event types.

The specs are not mature enough yet to produce anything more than a viewer. Any VR rig expensive enough to let you edit wouldn't let you edit over webVR.

@eric-schleicher

eric-schleicher commented Jul 15, 2017

The specs are not mature enough yet to produce anything more than a viewer.

Well, the WebVR spec shouldn't have interactions, as it's intended only to provide the basic interface to enumerate the hardware and provide the plumbing for HMDs and tracked input devices, which it does quite well at this point.

Translating the immediate UI data (in the source form of vertex buffers) into three.js objects would be trivial. This leaves it very open for libraries like A-Frame to expose the next level of functionality up to the ECF and scene graph, and then, with its directives, to provide the interaction controllers.

If an interface to solvespace's entities and solver were to be available, the A-Frame components could manage the flow of input to the solvespace model and represent the synchronization in the scene graph.

In this fashion, JavaScript A-Frame components could handle the interaction with whatever solvespace APIs are available: for example, loading & saving (from/to XHTML), manipulating the DOM, and requesting actions in the solvespace state.

So the question is: if exposed through to JavaScript, how would one interact with solvespace? Would pseudocode like the following be possible (as a practical example)?

On an (already existing) plane, add a line entity and define a horizontal constraint from JavaScript:

var SLVS = require('solvespace');
var myPlane = SLVS.planes[0];  // or SLVS.planes["XY"]
var lineStart = new SLVS.vector2(1, 1);
var lineEnd = new SLVS.vector2(5, 1);
var myNewLine = new SLVS.Line(lineStart, lineEnd);
// the horizontal constraint applies to the new line, not the plane
var thatLinesHorizConstraint = new SLVS.horizontal(myNewLine);
myPlane.addEntity(myNewLine);
myPlane.addEntity(thatLinesHorizConstraint);

If so, then building the controllers to drive the interface would be quite straightforward, as both a 2D and a fully spatial (VR) interface.

Would this pseudocode be representative of an emscripten-built interface in JavaScript? That would be hella cool.

WRT

Any VR rig expensive enough to let you edit wouldn't let you edit over webVR.

Kindly, I offer that there is no distinction between the caliber of computer required to render basic scenes in VR and to edit the most complex models solvespace is capable of producing. That is to say, any VR-capable machine has plenty of go-juice to handle rendering solvespace models.

We have WebVR-based tools with many millions of points being rendered and shaded at VR refresh rates in a browser on mainstream hardware. Considering how light solvespace is on UI generally and how few GL features it uses, there is zero risk of needing a heavy-duty VR rig to run it.

@traverseda
Author

traverseda commented Jul 15, 2017

any VR-capable machine has plenty of go-juice to handle rendering solvespace models.

I wasn't speaking so much about "go-juice" as "spatial controllers". But point taken.

You'd interact with it through preamble.js, as I understand it. So a bit more of a pain in the ass than your pseudocode, but probably pretty reasonable.

If you wanted to pull the "render data" out of solvespace, and render it yourself, I do know that there's a solvespace as a library thing. I don't know how up to date it is.

There's also a headless option, and you could pull all kinds of data out of it, using preamble function calls.
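For what it's worth, exposing such a call from the C++ side under Emscripten mostly means marking an entry point for export; EMSCRIPTEN_KEEPALIVE and Module.ccall are real Emscripten mechanisms, while slvs_add_line itself is a hypothetical example:

```cpp
#include <emscripten.h>

// Hypothetical export: a C-linkage entry point survives dead-code
// elimination via EMSCRIPTEN_KEEPALIVE and becomes callable from JS as
//   Module.ccall('slvs_add_line', 'number',
//                ['number', 'number', 'number', 'number'], [1, 1, 5, 1]);
extern "C" EMSCRIPTEN_KEEPALIVE
int slvs_add_line(double x0, double y0, double x1, double y1) {
    // ...create the line entity inside SolveSpace and return its handle...
    return 0;  // placeholder
}
```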

But probably what you actually want is to build a whole new thing, and to use solvespace as a library.

@whitequark
Contributor

If you wanted to pull the "render data" out of solvespace, and render it yourself, I do know that there's a solvespace as a library thing. I don't know how up to date it is.

This is just the solver (think insides of one group), not complete SolveSpace. Complete SolveSpace will ~never be available as a library, the effort/benefit ratio is too low.

But probably what you actually want is to build a whole new thing, and to use solvespace as a library.

What he wants is a new platform, like GTK and macOS.

@whitequark
Contributor

@traverseda So, I looked at imgui. I think the problem is going to be font rendering. It really should be using freetype instead of stb_truetype. Maybe we could do with a fork of imgui...

@traverseda
Author

Somebody has already done some work on using freetype. See this thread and this repo.

Personally? I wouldn't bother too much. It can be fixed, but it's probably good enough for now.

It really doesn't look that bad in practice. Personally, I'd throw something together quickly and move my focus to the hierarchical layers project. I don't think slightly ugly font rasterization is going to hurt accessibility, at least while you're still in the "find early adopters, get more support" stage.

@eric-schleicher

FWIW, I'm with @whitequark. The perceived benefit of imgui would be new UI functionality and supportability, not aesthetics, which I think aren't an improvement over the current solvespace UI.

@whitequark
Contributor

I moderately like the API of imgui but frankly that's about the only thing I want to drag into SolveSpace. It uses global state (ew), doesn't use FreeType, doesn't do RTL, tries to do IME in weird ways... naaah.

It's really not hard to do some coordinate transforms while building up a UI and then associate state with hierarchical string identifiers. We have most of the infra for that already, and I've written similar UIs before.

@Evil-Spirit
Collaborator

Evil-Spirit commented Aug 6, 2017

@traverseda,
@whitequark
What I have understood is the following:

  1. We can use imgui only for the text window on every platform. This is massive refactoring with significant code elimination. It allows an easy way to implement GUI-related features: one line of code for text/value editing in a single place, instead of adding a couple of lines in different places/files (Edit enum + ugly Printf + editing function + apply function). Also, during this process we can get full localization.
  2. We can use imgui for every GUI element (toolbar, menu, context menu, etc.) as a new platform (it can be built for every platform for the same appearance, but the main target is the web browser).

These two points are different. We can implement just 1) without implementing 2). We can implement 1) using native input modes for native-look-and-feel builds (exactly like it works now) and using imgui input modes for the other build modes. We can just implement our own imgui element for input, and it can work as we want in each of these cases. There is no need to rewrite all the platform code on top of SDL or other things. For the first step, I suggest substituting the TextWindow UI with imgui and achieving:

  1. An easy and transparent way to implement GUI features
  2. A smaller project code base
  3. An easy way to review contributions (can you quickly say whether "min %@ %Fl%Ll%f%D[change]%E" is OK?)
  4. Full localization
  5. Less memory consumption

@whitequark
Contributor

@Evil-Spirit You are absolutely correct in your analysis. The only thing I want to do differently is to use a template-based immediate-mode UI (like HTML) instead of a code-based one (like imgui), because it is really hard to localize a code-based immediate-mode UI if you need to e.g. reorder a few elements for a different language.
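To make the reordering argument concrete, here is a toy, self-contained sketch (none of this is SolveSpace or imgui code): a code-based UI bakes widget order into the call sequence, while a template-based one reads the order from data that a translator can edit:

```cpp
#include <string>

// Toy illustration of the localization-reordering problem.
// A code-based immediate-mode UI fixes widget order at compile time:
//   Label("Width"); InputField(); Label("mm");
// A template-based UI reads the order from data instead, so a
// translation can move the unit in front of the edit field without
// touching any C++.
std::string RenderFromTemplate(const std::string &tmpl) {
    std::string out;
    for (char c : tmpl) {
        switch (c) {
        case 'L': out += "[label]"; break;  // a text label
        case 'E': out += "[edit]";  break;  // an edit field
        case 'U': out += "[unit]";  break;  // the unit suffix
        }
    }
    return out;
}

// RenderFromTemplate("LEU") yields "[label][edit][unit]" (English order);
// RenderFromTemplate("LUE") yields "[label][unit][edit]" (unit-first locale).
```

With the code-based approach, swapping the unit and the edit field for some locale means patching C++; here it is a one-character change in translatable data.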

@whitequark
Contributor

And I fully agree that the current approach using the Printf function is intractable and we should migrate from it ASAP. This is actually what I was working on in the platform branch before I had to temporarily cease development.

@Evil-Spirit
Collaborator

because it is really hard to localize code-based immediate-mode UI if you need to e.g. reorder a few elements for a different language.

Why is this harder to localize (except some reordering and such things)? What libraries should we use?

@whitequark
Contributor

Why is this harder to localize (except some reordering and such things)?

Reorderings, RTL scripts, stuff like that. I think our own library would be best for this; it's really not a lot of code, and I can look at imgui for inspiration. We have large chunks of it already implemented, e.g. the Canvas interface and batching of primitives; it would be silly not to take advantage of that.

@dumblob

dumblob commented Aug 13, 2017

I read the whole discussion and totally agree with @whitequark's decisions.

Though should there appear any further doubts, just take a look at Quarks as an "ultimate" successor of Nuklear (which itself is a significantly better-architected, cleaner, leaner, faster and more KISS competitor to dear imgui feature-wise). Quarks supports all three modes of UI specification:

  1. pure declarative (i.e. template-based/retained-mode)
  2. semi immediate (the templates are constructed in run-time - in the best case just at the initialization/startup of the app, in the worst case each frame; the rest is handled the same as in the pure declarative mode)
  3. pure immediate (i.e. code-based; everything is handled immediately/just_in_time)

Imgui supports just (3). Nuklear is a mixture of (2) and (3). Quarks seamlessly supports all of (1), (2) and (3).

@traverseda
Author

My general view is that communication is very often more important than getting the "best thing". More powerful, more flexible languages are often worse because there's a much bigger "space" the function could live in. It takes longer to on-board new people, and the pipeline for getting developers on board with your project involves making that first PR as easy as possible.

I will note that Quarks doesn't seem to have much in the way of documentation or examples.

I'm not tied to dear-imgui in particular, it's just one of the better documented projects right now. What I am tied to is making the UI on top of something reliably documented and maintained by a third-party. Anything else seems like a pretty obviously bad choice given the constraints on this project.

Unfortunately I'm too busy to do anything on that for the next two weeks.

@Evil-Spirit
Collaborator

I will note that Quarks doesn't seem to have much in the way of documentation or examples.

I have looked into the sources, and I can only say that I don't want to be the one solving any problems with this code if that happens. I don't like this code. Imgui looks better inside, to me.

@Evil-Spirit
Collaborator

Evil-Spirit commented Aug 14, 2017

Imgui supports just (3). Nuklear is a mixture of (2) and (3). Quarks seamlessly supports all of (1), (2) and (3).

This is not a reason to use it. The framework must be (at least):

  1. Well documented
  2. Popular
  3. Simple code inside and outside

The C language and the coding style used by C developers look like garbage to me. I don't like C evangelists because they are annoying and fight for pure C without reason. Those who have real reasons fight in real battles (the Linux kernel, or somewhere else where there is still a place for pure C).

@dumblob

dumblob commented Aug 14, 2017

I don't like this code. Imgui inside looks better for me.

That's very subjective. I have e.g. an opposite view 😉 But anyway, it's irrelevant, as what counts is the semantic quality of the code, and much less the syntax.

fight for pure C, but without reason.

Quarks (and Nuklear) have one very strong reason to use C: namely, extremely easy, lightweight and fully multiplatform embedding (where "platform" means both SW and HW, from tiny 8-bit CPUs with no OS and no libraries, not even a C standard library, up to highly powerful workstations with the most modern OSes and libraries). This also includes bindings for any existing programming language (C++ with its dynamic nature can't ever compete in this field, because by using non-dynamic features, most of the C++ advantages would be lost).

Note Quarks is not even alpha yet, but it is moving forward extremely quickly (it's an extremely tiny code base, and it'll stay so thanks to its architecture); therefore I've written:

... and totally agree with @whitequark's decisions (i.e. stay with the current approach for the time being)
... should there appear any further doubts (which according to the discussion above could appear first in a year or more)

@Evil-Spirit
Collaborator

Evil-Spirit commented Aug 15, 2017

That's very subjective. I have e.g. an opposite view 😉 But anyway, it's irrelevant, as what counts is the semantic quality of the code, and much less the syntax.

Of course. Any library or piece of software is someone's opinion. Opinions can't be the same for different people; that's why we have hundreds of software programs that do the same thing in different ways. SolveSpace looks like imgui, not like Quarks. This is not an argument, just my opinion.

from tiny 8bit CPUs

We will never run SolveSpace on an 8-bit CPU. That's why we don't need to choose a library which was designed with such requirements in mind. This can be a bottleneck in the future.

@dumblob

dumblob commented Aug 15, 2017

This can be a bottleneck in the future.

Or a great advantage 😉 as it shows a perfectly scalable design (as I wrote above - ...up to highly powerful workstations with most modern OSes and libraries etc.).

I'm curious how the world will look like in a year or two regarding programming libraries of all kinds.

@Evil-Spirit
Collaborator

Evil-Spirit commented Aug 16, 2017

Or a great advantage

The great advantage is not using one more library in a project which initially was designed without any dependencies.

@whitequark
Contributor

I don't think dependencies are inherently bad, especially dependencies that don't have platform-specific functionality. At the same time there's a lot of really shoddy code written under the banner of minimalism.

Folks, this has dragged on and on without any consensus. I have heard your arguments and responded to them; let's stop wasting time talking.

@traverseda
Author

Given that I'm not going to have time to do any real work on this for a while, and you have heard all these arguments, I'm going to close the issue.

@imrn

imrn commented May 9, 2019

FreeType support in imgui:
ocornut/imgui#618

@whitequark
Contributor

As I understand it, your HTML inputs would essentially be running in a different process, so what you're actually talking about is implementing a whole new UI in html/JS, and including some kind of IPC mechanism (Presumably whatever emscripten uses by default)? Am I understanding that correctly?

Incidentally you can look at the Emscripten port in #419 now.

@traverseda traverseda reopened this Sep 22, 2020
@ruevs ruevs added documentation Web The emscripten based web version of SolveSpace labels Aug 22, 2022
@ruevs ruevs mentioned this issue Aug 31, 2023

No branches or pull requests

8 participants