There are plenty of benchmarks of GPU performance in terms of operations per second (rendering, float arithmetic, etc.), but few on quality and precision. Most would argue that such a benchmark is not very relevant; that is true in the general case, but not in all cases (it is a bit like having 5K resolution on a 4-inch device, but that is another story). In this post I discuss the “special” cases where it does matter, focusing on mediump vs. highp in fragment programs as well as vertex programs.
The OpenGL ES spec loosely defines the float precision requirements and leaves the implementation to the vendors, with the result that no two devices have the same precision. For most applications this is not an issue, but when dealing with large CAD data-sets that require consistent behavior on all devices, such inconsistency becomes a problem, especially since precision relates directly to performance.
While testing our application on a Nexus 10 device, which includes the new Mali-T604, I encountered surprisingly bad rendering quality (second image below). Surprising, since I had very high hopes for this new device, and the same implementation gives very good quality on its predecessor, the Mali-400, and on Tegra 3. While searching and experimenting for hints to this issue, I came across a blog post by Stuart Russell comparing float precision across devices, and a very informative post by Tom Olson describing float precision on the Mali-T604 and why it is different (which gave the hint/answer to my problem).
The problem: I used to depend on mediump precision on all devices, since highp was not available on some of the devices I tested, and did not perform well on others even when they claimed to support it. After reading those posts, I decided to run a similar benchmark, but on mediump, with the devices at hand. The shaders are similar to the ones described in the referenced posts, with one modification: the color is passed from the vertex shader, to get an idea of the precision in vertex shaders as well.
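To get a feel for how coarse a minimal mediump can be, here is a small CPU-side sketch in Python. It round-trips values through IEEE 754 half precision, which is roughly the minimum a mediump float must provide (about 2^-10 relative precision); actual vendor precision varies, which is exactly what a benchmark like this measures.

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision,
    roughly emulating a minimal mediump float (OpenGL ES only
    guarantees about 2^-10 relative precision for mediump)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Two color values whose difference is below the fp16 step near 0.5
# collapse to the same number, i.e. visible banding in a gradient:
print(to_fp16(0.5) == to_fp16(0.5 + 1.0 / 8192.0))  # True: both collapse to 0.5
```

On a device whose mediump is implemented as fp24 or fp32 the same pair of values would stay distinct, which is why the same shader can look fine on one GPU and banded on another.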
IFC is one of the oldest standards for data exchange within the AEC industry, and it has become one of the most widely supported exchange formats. Using a standard that was started about ten years ago, however, brings with it obsolete ways of defining objects and exchange files. Examples are the use of the EXPRESS language to define the schema and the use of ASCII as the file format.
While it is great to have a format that is being adopted by a growing list of publishers, issues with the schema and its implementations make it hard to adopt and rely on. Below are some of the issues that are, I think, the most visible, and a possible solution to consider!
Highly typed: the IFC schema (defined in EXPRESS, exchanged as ISO 10303-21 STEP files) has an object definition for each type of object in the AEC industry. This results in a huge schema, instead of one generic type of data object. The attributes of an object depend on its type, which makes exporters and readers of the file complex. As a developer, the highly specific format of the objects makes the specs your best friend while writing a parser, reader, or exporter for this format.
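To illustrate the point, compare one class per entity type with a single generic object whose attributes are keyed by name; a parser only ever needs to understand the latter shape. The class and attribute names below are hypothetical, heavily simplified stand-ins, not the real IFC definitions:

```python
from dataclasses import dataclass, field

# One-class-per-type, mirroring IFC's highly typed schema
# (hypothetical, heavily simplified stand-ins for IfcWall / IfcDoor):
@dataclass
class IfcWall:
    global_id: str
    height: float
    thickness: float

@dataclass
class IfcDoor:
    global_id: str
    width: float
    fire_rated: bool

# A generic alternative: one object type, attributes keyed by name.
# A reader needs only this one shape, not hundreds of classes.
@dataclass
class GenericObject:
    global_id: str
    type_name: str
    attributes: dict = field(default_factory=dict)

wall = GenericObject("2O2Fr$t4X7Zf8NOew3FLOH", "Wall",
                     {"height": 3.0, "thickness": 0.2})
```

With the typed schema, every new entity type means new code in every exporter and reader; with the generic shape, only the attribute vocabulary grows.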
By the end of this month I will be presenting the paper behind the Resolution Independent Fonts, Curves, and UI rendering API recently added to JOGL.
Title: Resolution Independent NURBS Curves Rendering using Programmable Graphics Pipeline
Non-Uniform Rational B-Splines (NURBS) are widely used, especially in the design and manufacturing industry, for their precision and ability to represent complex shapes. These properties come at the cost of being computationally expensive for rendering. Many methods have tackled NURBS rendering by view based approximations and/or heavy pre-processing. We present a method for resolution independent rendering of curves and shapes, defined by NURBS, by utilizing the high parallelism of the programmable graphics hardware. The computation of the curve is processed directly on the GPU, without the need for complex pre-processing and/or additional storage of the basis functions as textures. Our method enables rendering of a complex NURBS shape in precise form, by defining only the curve’s hull. We also present a method to enhance the performance of the preprocessing stage, mainly triangulation, that fits our requirements and speeds up the process. With optimized preprocessing and using only the mobile profile of the programmable graphics pipeline, we achieve a fast and resolution independent method for rendering NURBS based 2D shapes on desktop and mobile devices.
Conference Program: http://gc2011.graphicon.ru/en/program/scientific#en4
Thanks to Sven Gothel and the Jogamp Community for all the fruitful discussions regarding this topic.
Hope to see you all there!
Update: Paper and slides published on Jogamp.org.
Once again, Siggraph was great… I didn’t get time to see Vancouver, but it looked nice!
I had the pleasure of co-presenting with Sven the latest developments on Jogamp.org, mainly JOGL, highlighting the newly added support for embedded devices (Linux ARM and Android) and the new resolution-independent font and user interface rendering. All in all, it was a great “journey”, and the feedback we got was great as well.
The journey to Siggraph (the story)
JOGL Embedded Devices (the status)
The presentation will be posted to jogamp.org by next week; we still need to add snapshots to it, etc.
The Pictures: (courtesy of Justin)
Cross posting announcement with Sven Gothel.
Jogamp will be at Siggraph this year after the great success of last year’s BOF.
JogAmp: 2D/3D & Multimedia Across Devices
Tuesday, 9 August | 2:30 pm – 4:30 pm | Vancouver Convention Centre
JogAmp provides JOGL (OpenGL), JOCL (OpenCL) across devices on top of Java.
Showcasing Resolution Independent Curve, Font and UI GPU Rendering on desktop and mobile (Android, etc).
sgothel (at) jogamp.org
Our goals are to
- recapitulate last year’s progress and discuss our directions
- demonstrate JogAmp modules (JOGL, JOCL, ..) on multiple devices, PC (Windows, Linux, ..) and mobile (Android, Linux, ..)
- showcase our new resolution-independent curve rendering, used for shapes, fonts, and UI. We would also like to discuss its usability and how to accomplish a complete UI toolkit
- showcase user contributions and applications / use-cases
If you would like to showcase a tool/demo that uses JOGL, please contact us ASAP.
As usual, all our results will be visible via jogamp.org.
Hope to see you there….
One of the goals behind the work done on GPU-based curve rendering was the ability to investigate the possibilities of a GPU-based user interface in a 2D/3D scene, on top of JOGL.
As a test, I pushed to the repository a test case for rendering a button with a label, using the resolution-independent Region/Text renderer.
Screenshots: (click to view)
Below is a write-up of the latest project by Sven Gothel and myself, which is now published and merged into the JogAmp JOGL project.
The story: this project started as an idea, in a chat with Sven about possible ways to bypass the Microsoft patent on GPU curve rendering. After digging into the math of the Loop/Blinn patent, I devised a way to solve the problem with a different approach than the one used in the patent; the math behind the new approach will be published here soon (it still needs to be typed up, as it currently lives in a notebook with too many scribbles).
After making sure the math worked in theory, I moved on to developing a prototype of the algorithm. While doing this, I developed a tessellation utility that provides Delaunay-based triangulation of the curved outlines, mainly to avoid hacking GLU tessellation to get the triangles and ending up with a slow solution. Once we had a working version of the demo, we went on to the “harder” part: getting good-quality anti-aliasing on ever-so-tiny fonts. After lots of discussion and fighting the small dimples in the curves, we devised a method that produced the best output of all the ones we tried, with sharp anti-aliasing.
At that point, we moved from building a prototype (proof of concept) to building an API we could add to the JOGL API, which was a great experience by itself, thinking of all the ways a user might want to use it. We looked into making it more generic, usable, readable, and stable for users, not to forget making sure all the code is BSD-license clean.
Yesterday, we reached a point where we agreed that it is “clean” and good for an initial public release, merged with the JOGL project. More than the results and outcomes, the best part was the collaboration and the discussions held during development. In short, it was a great experience, and lots of fun!
As for the tech part: first some new snapshots,
In the past couple of weeks, I worked on developing a new technique, and an API on top of it, for resolution-independent curve-bounded regions and text in general. The main goal of this project was to move away from the Microsoft-patented Loop/Blinn approach to Bezier (quadratic/cubic) curve rendering.
To bypass the Microsoft patent, I developed the functions and parameters based on NURBS blending functions. This approach gives great results, especially in the region of inflection points, which was a main concern as well. The next problem tackled was anti-aliasing for the rendered curves, especially targeting small fonts, since MSAA on small fonts gives a blurry, non-sharp result.
Below are some snapshots from demos created with the new text/curve renderer API (where the algorithm is implemented). Later on, I will publish the detailed algorithm with a review of the AA techniques used and tested. But first, over the coming week or so, I will polish the API before pushing it to the JOGL repository.
Continuation of Git in Enterprise – part 1
Setting up Git as the source control management (SCM) system in a company is really about setting up your development process. Git is powerful since it is distributed in nature and gives each user the ability to change anything without worrying about the consequences. You might say that won’t be good for you; well, it is, unless you prefer being paranoid about the idea that a developer might have changed something in the one and only central repository.
So how do you set it up? There is no single way to do it, but here I will describe one that works well.
The above is a sample server-side SCM setup. In the illustration we have five repositories with different branches, the one in blue being the master branch. Each developer has the master branch (preferably) and the branches he/she is working on.
On the left, we have the central repository, which is the master repository of the project. This repository holds the latest stable, merged, and tested code of all the development tracks (branches). You can assign one or more maintainers to handle merging into the master repository. Currently you cannot specify access rights per branch in Git (presumably a coming feature), so if you want to assign a maintainer to a branch, a verbal agreement has to be enough.
On the right you see the developers (the hard workers); each has the master branch and the branch(es), if any, he/she is working on. These repositories give write access to their developer only, and read access to all others. Now that we have the setup, the process that brings this all together might look like the following.
- Each developer clones the central repository and pushes it to his own repository.
- Each developer fetches the branch he is to work on (if not master) and pushes it to his repository.
- Developers: commit changes to the code and push to their own repositories.
- “Sr.” developer: reviews the developers’ changes, merges them into his repository, and pushes; the developers pull after the merge.
- Master-repository branch maintainer (could be one of the above): reviews and pulls the changes once everything looks good, and pushes to the central repository.
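The steps above can be sketched as the command sequences each role runs. A minimal sketch in Python; the host `git.example.com`, repository paths, developer name, and branch name are all hypothetical placeholders:

```python
def workflow_commands(role: str, dev: str = "alice", branch: str = "feature-x") -> list:
    """Return the git commands each role in the setup above would run.
    Host, repository paths, and names are hypothetical placeholders."""
    central = "ssh://git.example.com/central.git"        # master repository
    own = f"ssh://git.example.com/{dev}.git"             # developer's repository
    if role == "developer-setup":
        return [f"git clone {central}",                  # start from the central repo
                f"git remote add own {own}",             # register the personal repo
                f"git checkout -b {branch} origin/{branch}",
                f"git push own {branch}"]                # publish the work branch
    if role == "developer-work":
        return ["git commit -a",                         # commit locally
                f"git push own {branch}"]                # push to own repo only
    if role == "senior-review":
        return [f"git fetch {own} {branch}",             # fetch the developer's work
                "git merge FETCH_HEAD",                  # review and merge it
                f"git push own {branch}"]                # publish the merged state
    if role == "maintainer":
        return [f"git fetch {own} {branch}",
                "git merge FETCH_HEAD",
                f"git push {central} master"]            # only role writing to central
    raise ValueError(f"unknown role: {role}")

for cmd in workflow_commands("developer-setup"):
    print(cmd)
```

The key property of the setup survives in the sketch: only the maintainer role ever pushes to the central repository; everyone else publishes to a personal repository that others pull from.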
In this process, changes that don’t look good can simply be ignored, reverted, or followed by another commit. And by adding a continuous-build server to the master repository branches, you can ensure that everything stays on the right track.
Credits: most of the ideas in this post are from discussions with Sven Gothel, and JogAmp.org setup.
3D cloud computing is becoming a topic of high interest, and it is actually very relevant to general visualization problems, especially scientific visualization. Nvidia picked up the topic and is presenting RealityServer and its Tesla servers as a solution that will, in the near future, revolutionize the industry, with effects that will ripple across to other companies.
Nvidia’s solution, while great and very powerful, raises some questions about the approach. The RealityServer application delivers progressive JPEG images as its result, which are displayed on the end device. This approach removes all computation from the end device and places it in the cloud.
This looks great for low-end devices (GPU-wise), but what if you want to combine it with a medium- or high-end device? With this approach you cannot make use of the client device’s power, which would be valuable from my point of view. I would rather view the 3D cloud computing problem as on-demand usage of the cloud, not as the sole computation handler.
The 3D cloud server, as is, requires a very fast connection, which is not always available, and discards the device’s local computation power. The performance of such a system cannot be tuned if any of these requirements is not met or not consistent. As I see it, the best solution would be to make the client device the master node and, depending on its power, frame rate, and connectivity, call on the cloud to render full frames or just a rendering pass. This makes it possible to provide consistent performance and quality in the application.
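As a sketch of that master-node idea (the function name, return values, and thresholds are mine, purely illustrative), the client could measure its own frame rate and connectivity and decide, per frame, where the work goes:

```python
def choose_renderer(local_fps: float, bandwidth_mbps: float,
                    target_fps: float = 30.0,
                    min_bandwidth_mbps: float = 8.0) -> str:
    """Sketch of the proposed master-node policy: the client measures its
    own frame rate and connectivity and decides where the next frame,
    or rendering pass, is produced. Thresholds are illustrative only."""
    if local_fps >= target_fps:
        return "local"            # the device keeps up on its own
    if bandwidth_mbps >= min_bandwidth_mbps:
        return "cloud"            # offload frames/passes to the cloud
    return "local-reduced"        # link too weak: degrade quality locally

print(choose_renderer(45.0, 2.0))   # local
print(choose_renderer(12.0, 20.0))  # cloud
print(choose_renderer(12.0, 2.0))   # local-reduced
```

The point of the policy is the fallback chain: the cloud is consulted only when the device cannot hold the target frame rate, and a weak link never blocks rendering entirely.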