ICC Examin 1.0 on Android

Since version 1.0, ICC Examin supports viewing ICC color profiles on the Android mobile platform. ICC Examin shows ICC color profile elements graphically, which makes it much easier to understand their content. Color primaries, white point, curves, tables and color lists are displayed both numerically and as graphics. Matrices, international texts and metadata become much easier to read.

* most profile elements from ICC specification version 2 and version 4
* additionally, some widely used non-standard tags are understood

ICC color profiles are used in photography, print and various operating systems to improve the visual appearance. An ICC profile describes the color response of a color device. Read more about the ISO 15076-1:2010 standard / specification ICC.1:2010-12, profile versions, color profiles and ICC color management at www.color.org.

The ICC Examin app has been completely rewritten in Qt/QML. QML is a declarative language that makes it easy to define GUI elements and write layouts with less code. In recent years the Qt project has extended its support from desktop platforms to mobile ones like Nokia's MeeGo, Sailfish OS, iOS, Android, embedded devices and more. ICC Examin is available as a paid app in the Google Play Store. Sources are currently closed in order to financially support further development. This version of ICC Examin continues to use the Oyranos CMS. New is the dependency on RefIccMAX for parsing ICC profile binaries. In the process, both the RefIccMAX library and the Oyranos Color Management System received changes and fixes in git for cross compilation with the Android libraries. Those changes will appear in the next respective releases.

The FLTK toolkit, as used in previous versions, was not ported to Android or other mobile platforms, so a complete rewrite was unavoidable. The old FLTK based version is still maintained by the same author.

Install openSUSE Tumbleweed + KDE on MacBook 2015

It is pretty easy to install openSUSE Linux as the operating system on a MacBook. However, there are some pitfalls which can cause trouble. This article gives some hints about a dual boot setup with OS X 10.10 and the, at time of writing, current openSUSE Tumbleweed 20170104 (oS TW) on a MacBook Pro from early 2015. A recent Linux kernel, as shipped in TW, is advisable as it provides better hardware support.

The LiveCD can be downloaded from www.opensuse.org and written with the ImageWriter GUI to a USB stick (~1 GB). I chose the Live KDE variant and it ran well on a first test. During boot, right after the first sound and the display lighting up, hold the Option/alt key and wait for the disk selection icon. Put the USB stick with Linux into a USB port, wait until the removable media icon appears and select it for booting. For me everything went fine: the internal display, sound, touchpad and keyboard were detected and worked well.

After that test it was a good time to back up all data from the internal flash drive. I wrote a compressed disk image to a stick using the Unix dd command (see the sketch after this walkthrough). With that image and the live media I was able to recover in case anything went wrong.

It is not easy to satisfy OS X with its journaled HFS and the newly introduced logical volume layout, which comes with a separate repair partition directly after the main OS partition. That combination is pretty fragile and should not be touched. The rescue partition can be booted by holding command + r during boot. External tools failed for me, so I booted into rescue mode and used OS X's diskutil or its Disk Utility GUI counterpart. The tool allows splitting the disk into several partitions; the EFI and rescue partitions are hidden in the GUI. The newly created additional partitions can be formatted to exFAT and later be modified for the Linux installation. One additional HFS partition was created for sharing data between OS X and Linux with the comfortable Unix attributes. The well known exFAT, as used by many bigger USB sticks, is a possible option as well, but needs the exfat-kmp kernel module, which is not installed by default due to Microsoft's patent licensing policy for that file system. In order to write to HFS from Linux, journaling must be switched off on that HFS partition. This can be done inside the OS X Disk Utility GUI by selecting the data partition, holding the alt key and searching the menu for the disable journaling entry.

After rebooting into the live media, I clicked the Install icon on the desktop background, which starts openSUSE's YaST tool. Depending on the available space, it might be a good idea to disable the btrfs filesystem snapshot feature, as it can eat up lots of disk space during each update. Another pitfall is the boot stage: select the secure GrubEFI mode there, as Grub needs special handling for the required EFI boot process. That's it. Finish the installation and you should be able to reboot into Linux with the alt key held.
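A minimal sketch of that backup step from the live system; the device name /dev/sda and the mount point of the backup stick are only examples and will differ on your machine:

# create a compressed image of the whole internal drive
sudo dd if=/dev/sda bs=4M | gzip -c > /run/media/linux/backup/macbook-flash.img.gz

# restore it later, if needed
# gunzip -c /run/media/linux/backup/macbook-flash.img.gz | sudo dd of=/dev/sda bs=4M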

My MacBook unfortunately has a defect: its boot manager is very slow, and erasing and reinstalling OS X did not fix that issue. To circumvent it, I need to reset the NVRAM by pressing alt+cmd+r+p at boot start for around 14 seconds until the display goes dark, then hold alt on the soon following boot sound, select the EFI TW disk in the Apple Boot Manager and can then go fluently through the boot process. Without that extra step, the keyboard and mouse might not respond in Linux at all, except for the power button. A warm reboot from Linux works fine; OS X does a cold reboot and needs the extra sequence.

KDE's Plasma needs some configuration to run properly on a high resolution display. Beyond that, additional monitors can be connected and easily configured with the kscreen module in System Settings. Hibernation works fine. Currently the notebook's SD slot is ignored and the FaceTime camera has no ready openSUSE packages. Battery run time can be extended by spartan power consumption (less brightness, fewer USB devices and pulseaudio -k; check with powertop), but it is not too far from OS X anyway.

Watching org.libelektra with Qt

libelektra is a configuration library and tool set with very many capabilities. Here I'd like to show how to observe data model changes caused by key/value manipulations outside of the actual application inside a user desktop. libelektra broadcasts changes as D-Bus messages. The Oyranos projects will use this method in the next release to sync the settings views of GUIs like qcmsevents, Synnefo and KDE's KolorManager with libOyranos and its CLI tools.
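These broadcasts can be watched without writing any code, e.g. with the dbus-monitor tool and a match rule on the interface; a small sketch, assuming the signals travel over the session bus as in the code below:

dbus-monitor "type='signal',interface='org.libelektra',path='/org/libelektra/configuration'"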

Here is a small example of connecting the org.libelektra interface via the QDBusConnection class to a class callback function:

Declare a callback function in your Qt class header:

public slots:
 void configChanged( QString msg );

Include the QtDBus API in your sources:

#include <QtDBus/QtDBus>

Wire the org.libelektra interface to your callback, e.g. in your Qt class's constructor:

if( QDBusConnection::sessionBus().connect( QString(), "/org/libelektra/configuration", "org.libelektra", QString(),
 this, SLOT( configChanged( QString ) )) )
 fprintf(stderr, "=================== Done connect\n" );

The org.libelektra signals then arrive in your callback:

void Synnefo::configChanged( QString msg )
{
  fprintf( stdout, "config changed: %s\n", msg.toLocal8Bit().data() );
}

As the number of messages is not always known, it is useful to take the first message as a ping and update after a small timeout. Here is a more practical elaboration of the code:

// init a gate keeper in the class constructor
// (assumes a bool acceptDBusUpdate member in the class):
acceptDBusUpdate = true;

void Synnefo::configChanged( QString msg )
{
  // allow the first message to ping
  if(acceptDBusUpdate == false) return;
  // block more messages
  acceptDBusUpdate = false;

  // update the view slightly later and avoid trouble
  QTimer::singleShot(250, this, SLOT( update() ));
}

void Synnefo::update()
{
  // clear the Oyranos settings cache (Oyranos CMS specific)
  oyGetPersistentStrings( NULL );

  // the data model reading from libelektra and GUI update
  // code ...

  // open the door for more messages to come
  acceptDBusUpdate = true;
}
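For completeness, here is a minimal sketch of the corresponding class declaration with the names used in the snippets above; the QObject base class and the constructor signature are only assumptions:

class Synnefo : public QObject
{
  Q_OBJECT
public:
  explicit Synnefo( QObject * parent = 0 );

public slots:
  void configChanged( QString msg );
  void update();

private:
  bool acceptDBusUpdate;
};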

The above code works for both Qt4 and Qt5.

Responsive HTML with CSS and JavaScript

In this article you can learn how to make a minimalist web page readable on differently sized devices, like larger desktop screens and handhelds. The ingredients are HTML, CSS and a little JavaScript. The goals for my home page are:

  • most of the layout resides in CSS in a stateless way
  • minimal JavaScript
  • on small displays – single column layout
  • on wide format displays – division of text into columns
  • count of columns adapts to browser window width or screen size
  • combine with markdown


h1,h2,h3 {
  font-weight: bold;
  font-style: normal;
}

@media (min-width: 1000px) {
  .tiles {
    display: flex;
    justify-content: space-between;
    flex-wrap: wrap;
    align-items: flex-start;
    width: 100%;
  }
  .tile {
    flex: 0 1 49%;
  }
  .tile2 {
    flex: 1 280px;
  }
  h1,h2,h3 {
    font-weight: normal;
  }
}

@media (min-width: 1200px) {
  @supports ( display: flex ) {
    .tile {
      flex: 0 1 24%;
    }
  }
}
The content in class="tile" is shown as one up to four columns. tile2 has a fixed width and picks its column count by itself. By default all flex boxes behave like one normal column. With @media (min-width: 1000px) a bigger screen is assumed. Very likely there is an overlapping width range for bigger handhelds, tablets and smaller laptops, but the layout works reasonably, performs well when shrinking the web browser on a desktop or viewing fullscreen, and stays well readable. Expressing all tile styling with the flex: shorthand helps keeping compatibility with layout engines without flex support, like e.g. Dillo.

For reading on high-DPI monitors on small devices it is essential to set the font size properly. Update: Google and Mozilla recommend a meta "viewport" tag to signal to browsers that the page is prepared to handle scaling properly. No JavaScript is needed for that.

<meta name="viewport" content="width=device-width, initial-scale=1.0">

[Outdated: I found no way to do that in CSS so far. JavaScript:]

function make_responsive () {
  if( typeof screen != "undefined" ) {
    var fontSize = "1rem";
    if( screen.width < 400 ) {
      fontSize = "2rem";
    } else if( screen.width < 720 ) {
      fontSize = "1.5rem";
    } else if( screen.width < 1320 ) {
      fontSize = "1rem";
    }
    if( typeof document.children === "object" ) {
      var obj = document.children[0]; // html node
      obj.style["font-size"] = fontSize;
    } else if( typeof document.body != "undefined" ) {
      document.body.style.fontSize = fontSize;
    }
  }
}
document.addEventListener( "DOMContentLoaded", make_responsive, false );
window.addEventListener( "orientationchange", make_responsive, false );

[The above JavaScript carefully checks for various browser attributes and scales the font size to compensate for small screens and keep text readable.]

The above method works in all tested browsers (Firefox, Chrome, Konqueror, IE) besides Dillo and on all tested platforms (Linux/KDE, Android, WP8.1). The meta tag method works as well and is better for printing.

Below is some markup, as it can be used inside markdown, to illustrate the approach.


<div class="tiles">
<div class="tile"> My first text goes here. </div>
<div class="tile"> Second text goes here. </div>
<div class="tile"> Third text goes here. </div>
<div class="tile"> Fourth text goes here. </div>

In my previous articles you can read about using CSS3 for Translation and Web Open Font Format (WOFF) for Web Documents.

CSS3 for Translation

Years ago I used a CMS to bring content to a web page. But with evolving CSS, markdown syntax and comfortable git hosting, publishing smaller sites can be handled without a CMS. My home page is translated, so I wanted to express page translations in a stateless language. The ingredients are simple. My requirements are:

  • stateless CSS, no javascript
  • integrable with markdown syntax (html tags are ok’ish)
  • default language shall remain visible, when no translation was found
  • hopefully searchable by robots (Those need to understand CSS.)


/* hide translations initially */
.hide {
  display: none;
}

/* show a browser detected translation */
:lang(de) { display: block; }
li:lang(de) { display: list-item; }
a:lang(de) { display: inline; }
em:lang(de) { display: inline; }
span:lang(de) { display: inline; }

/* hide default language, if a translation was found */
:lang(de) ~ [lang=en] {
  display: none;
}

The CSS sets the display property of the elements matched by the :lang() selector. However, the selectors for the different display: types are somewhat long, not as short as I would like.


<span lang="de" class="hide"> Hallo _Welt_. </span>
<span lang="en"> Hello _World_. </span>

Even so, the plain markdown text does not look as straightforward as before, but it is acceptable IMO.

Hiding the default language uses the sibling combinator E ~ F and selects an element carrying the lang="en" attribute. Matching elements are hidden (display: none;). Here that is the default language string "Hello _World_." with the lang="en" attribute. This approach works fine in Firefox (49), Chrome (54), Konqueror (4.18, KHTML & WebKit) and Internet Explorer on WP8.1. Dillo (3.0.5) does not show the translation, only the English text, which is the correct fallback for an engine without :lang() support.

During my search I found approaches for content swapping with CSS: :lang()::before { content: xxx; }. But those were not well accessible. Comments and ideas are welcome.

Digital diaphragm for optical lenses

In photography most optical lenses use mechanical diaphragms for aperture control. They are traditionally manufactured from metal blades and work quite well. However, metal blades come with some disadvantages:

  • mechanical parts will sooner or later fail
  • the cheaper forms give strong diffraction spikes
  • manufacturers need more metal blades for a round iris, which is expensive
  • a metal blade with its sharp edges gives artefacts, which are visible in out of focus regions
  • on the other hand, contrast is very high thanks to the opaque metal

In order to obtain a better bokeh, some lenses are equipped with apodization filters. Those filters work mostly at fully open aperture and are very specialised and thus relatively expensive.

A digital aperture, built as a transparent display with enough spatial resolution, can not only improve the shape of the diaphragm. It could serve as an apodisation filter, if it supports enough gray levels, and it can change its form programmatically.

Two possible digital diaphragm forms are discussed below (placed between lens groups, or integrated into one group). The potential advantages:

  • leverage existing display technology
  • better aperture shape for reduced artefacts
  • apodisation filter on demand for best bokeh or faster light
  • programmable or at least updateable aperture pattern (sharp/gaussian/linear/…)
  • no metal blades or other mechanical parts to fail
  • could over the years become cheaper than the mechanical counterpart
  • reduces the number of glass-to-air surfaces in optical lens design
  • the aperture can be integrated into lens groups
  • display transparency is increasing quickly and reached about 45% for OLED by 2016, which at the moment means a loss of just over one f-stop (see the estimate after this list)
  • mobile demands high display resolutions anyway
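A rough estimate of that light loss, assuming a transmission of T = 0.45:

loss in f-stops = log2(1 / T) = log2(1 / 0.45) ≈ 1.15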

The digital aperture can easily be manufactured as a monochrome display and placed traditionally between two optical lens groups, where the diaphragm is located today. It is even possible to optically integrate the aperture into one lens group, without the additional glass-to-air surfaces that moving blades require. Once the optical quality of the digital filter display is good enough, a digital diaphragm could even become cheaper than a high quality mechanical counterpart.

<< relearned <<|

Bit shifting, done with the << and >> operators, allows C-like languages to express memory and storage access, which is quite important for reading and writing exchangeable data. But:

Question: where does the bit go when it is shifted left?

Answer: it depends

Long answer:

// On LSB/intel a left shift makes the bit move
// to the opposite side, the right. Omg
// The shift (<<,>>) operators follow the MSB scheme
// with the highest value left.
// Shift math expresses our written order:
// 10 is more than 01
// x left shift by n == x << n == x * pow(2,n)
#include <stdio.h>  // printf
#include <stdint.h> // uint16_t, uint8_t

int main(int argc, char **argv)
{
  uint16_t u16, i, n;
  uint8_t * u8p = (uint8_t*) &u16; // access the uint16_t as 2 bytes
  // iterate over all bit positions
  for(n = 0; n < 16; ++n)
  {
    // left shift operation
    u16 = 0x01 << n;
    // show the mathematical result
    printf("0x01 << %u:\t%d\n", n, u16);
    // show the bit position
    for(i = 0; i < 16; ++i) printf( "%u", u16 >> i & 0x01);
    // show in which of the two bytes the bit is actually stored
    for(i = 0; i < 2; ++i)
      if(u8p[i])
        printf(" byte[%d]", i);
    printf("\n");
  }
  return 0;
}

Result on a LSB/intel machine:

0x01 << 0:      1 
1000000000000000 byte[0] 
0x01 << 1:      2 
0100000000000000 byte[0] 
0x01 << 2:      4 
0010000000000000 byte[0] 
0x01 << 3:      8 
0001000000000000 byte[0]

On a MSB machine << moves bits to the left, while on a LSB machine << is a lie and moves them to the right in memory. For directional shifts I would like to use separate operators, e.g. <<| and |>>.

High DPI with FLTK

After switching to a notebook with a higher resolution monitor, I noticed that the FLTK based ICC Examin application looked way too small. Having worked much with pixel independent resolutions in QML during the last months, it was a pain to see the non-adapted FLTK GUI. I had the impression that, despite several years of very appreciated advancement in monitor technology, some parts of the graphics stack did not move along and take advantage. So I became curious about how to solve high DPI support the hard way.

First of all, a bit of introduction to my environment, which is openSUSE Linux and KDE-5 with KF5 5.5.3. Xorg often uses a hardcoded default of 96 DPI, which is very unfortunate, or just a bug? Anyway, KDE follows X11, so the desktop on the high resolution monitor initially looks as bad as any application: all windows, icons and text are way too small to be usable. In KDE's system settings I had to enable Force Fonts DPI and double its value from 96 to 192. In the kscreen module I had to set a scale of 2.0 and then increase the KDE task bar's width. Out of the box usability is bad with so many inconsistent manual user interventions. In comparison, in the likewise tested Unity DE I had to set a single display scaling factor of 2.0 and everything worked fine and instantly: icons, fonts and window sizes. It would be cool if DEs and Xorg understood the screen resolution. In the tested OS X 10.10 even different resolutions of multiple monitors are scaled correctly, so moving a window from a high DPI monitor to a traditional low resolution external monitor gives reasonable physical GUI rendering. Apple's OS X provides that good behaviour out of the box, without manual user intervention. It would be interesting how GNOME behaves with regard to display scaling.

Back to FLTK. As FLTK appears to define itself as pixel based, DPI detection or settings have no effect in FLTK. As an app developer I want to improve the user experience, so I first modified ICC Examin to render at a physically reasonable size from the start. I looked at the Fl::screen_dpi() function; it is only a helper for detecting DPI values, and in FLTK-1.3.3 it returns hardcoded values of 96 DPI under Linux. I noticed that XRandR provides correct millimeter based screen dimensions. Together with the screen resolution in pixels, also provided by XRandR, it is easy to calculate the DPI values. ICC Examin rendered much better with those XRandR based DPIs instead of FLTK's 96 DPI, but it looked slightly too big. The 192 DPI set in KDE is lower than the 227 DPI which XRandR detects for my notebook's monitor. KDE provides its forced DPI setting to applications by setting Xft.dpi in XResources; that way all Xft based applications should have the same basic font scaling, and KDE and Mozilla apps do use Xft. So adding parsing of the Xlib XResources solved that for ICC Examin. The remaining programming task was to scale FLTK's default font size from 14 pixels with: FL_NORMAL_SIZE = scale(14). Some more widget sizes, the FTGL font sizes for OpenGL, drawing line widths and graphics scaling were needed. After all those changes, ICC Examin now takes advantage of high resolution rendering inside KDE. Testing under Windows and OS X must follow.
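A minimal sketch of such a scale() helper, assuming the Xft.dpi entry from XResources is preferred and Fl::screen_dpi() serves as fallback; the helper names and the 96 DPI reference size follow the description above:

#include <FL/Fl.H>
#include <X11/Xlib.h>
#include <X11/Xresource.h>
#include <stdlib.h>

// read the desktop's forced DPI from the Xft.dpi XResources entry;
// fall back to FLTK's detected (under Linux hardcoded) screen DPI
static float getDPI()
{
  float dpi = 0.0f;
  Display * dpy = XOpenDisplay(NULL);
  if(dpy)
  {
    XrmInitialize();
    char * rms = XResourceManagerString(dpy);
    if(rms)
    {
      XrmDatabase db = XrmGetStringDatabase(rms);
      if(db)
      {
        XrmValue value; char * type = NULL;
        if(XrmGetResource(db, "Xft.dpi", "Xft.Dpi", &type, &value) && value.addr)
          dpi = atof(value.addr);
        XrmDestroyDatabase(db);
      }
    }
    XCloseDisplay(dpy);
  }
  if(dpi <= 0.0f)
  {
    float h, v;
    Fl::screen_dpi(h, v);
    dpi = h;
  }
  return dpi;
}

// scale a pixel value which was designed for a 96 DPI screen
static int scale(int pixel)
{
  return (int)(pixel * getDPI() / 96.0f + 0.5f);
}

// usage, e.g. early in main():
//   FL_NORMAL_SIZE = scale(14);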

The way to program high DPI support into a FLTK application was basically the same as in QML. However, Qt's QML takes more tasks off the developer by providing a relative font unit, much like CSS em sizes. For FLTK I would like to see some relative APIs in addition to the pixel based ones. That would help to write more elegant code and to integrate with FLTK's GUI layout program fluid. Computing moves more and more toward W3C technology, and it would be helpful for FLTK to follow.

Web Open Font Format (WOFF) for Web Documents

The Web Open Font Format (short WOFF; here using the Aladin font) is several years old. Still, it took some time to reach the point where WOFF is almost painless to use on the Linux desktop. WOFF is based on OpenType style fonts and is in some way similar to the better known TrueType fonts (.ttf). TTF fonts are widely known and used on the Windows platform. This feature rich kind of font is used for high quality text rendering by the system and in local office and design documents. WOFF aims at closing the gap by making those features available on the web. With these fonts it becomes possible to show nice looking type on paper and in web presentations in almost the same way. In order to make WOFF a success, several open source projects joined forces, among them Pango and Qt, and contributed to HarfBuzz, an OpenType text shaping engine. Firefox and other web engines can handle WOFF inside SVG web graphics and HTML web documents using HarfBuzz. Inkscape, at least since version 0.91.1, uses HarfBuzz too for text inside SVG web graphics. As Inkscape is able to produce PDFs, designing for both the web and the print world at the same time becomes easier on Linux.

Where to find and get WOFF fonts?
Open Font Library and Google host huge font collections, and there are more out on the web.

How to install WOFF?
For use inside Inkscape one needs to install the fonts locally. Just copy the fonts to your personal ~/.fonts/ path and run

fc-cache -f -v

After that procedure the fonts are visible inside a newly started Inkscape.

How to deploy SVG and WOFF on the Web?
Thankfully, using WOFF in SVG documents is similar to HTML documents. However, simply uploading an Inkscape SVG to the web as is will not be enough to show WOFF fonts. While viewing the document locally is fine, Firefox and friends need to find those fonts independently of the locally installed ones. Right now you need to manually edit your Inkscape SVG to point to the online location of your fonts. For that, open the SVG file in a text editor and place a CSS font-face reference right after the <svg> element, like:

<style type="text/css">
@font-face {
  font-family: "Aladin";
  src: url("fonts/Aladin-Regular.woff") format("woff");
}
</style>
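Text elements in the SVG can then simply reference the family name; a minimal, hypothetical example:

<text x="10" y="40" style="font-family:'Aladin'; font-size:24px;">Hello World</text>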

How to print an Inkscape SVG document containing WOFF?
Just convert to PDF from Inkscape's file menu. Inkscape takes care of embedding the needed fonts and creates a portable PDF.

In case your preferred software is not yet WOFF ready, try the woff2otf Python script for converting to the old TTF format.

Hope this small post gets some of you on the font fun path.

Portable Float Map with 16-bit Half

Recently we saw some lively discussions on the OpenEXR mailing list about support for half within the TIFF image format. That made me aware of the corresponding oyHALF code paths inside Oyranos. For easy testing, Oyranos uses the KISS format PPM, which comes with a three line ASCII header followed by the uncompressed pixel data. I wanted to create some RGB images containing 16-bit floating point half channels, but such a PFM format variant is not yet defined. So here comes a RFC.

A portable float map (PFM) starts with the first line identifier "Pf" or "PF" and contains 32-bit IEEE floating point data. The proposed 16-bit IEEE/Nvidia/OpenEXR floating point variant starts with a "Ph" or "PH" magic in the first line, analogous to PFM. "Ph" stands for grayscale with one sample per pixel; the "PH" identifier is used for RGB with three samples.
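Assuming the proposed variant keeps the remaining PFM header layout (image width and height on the second line, and on the third line a scale factor whose sign encodes the byte order), a RGB half file could start like this:

PH
512 256
-1.0
<512 x 256 x 3 uncompressed 16-bit half values follow>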

That’s it. Oyranos supports the format in git and maybe in the next 0.9.6 release.