Watching org.libelektra with Qt

libelektra is a configuration library and tool set. It provides many capabilities. Here I’d like to show how to observe data model changes from key/value manipulations outside of the actual application on the user’s desktop. libelektra broadcasts changes as D-Bus messages. The Oyranos project will use this method in its next release to sync the settings views of GUIs like qcmsevents, Synnefo and KDE’s KolorManager with libOyranos and its CLI tools.

Here is a small example for connecting the org.libelektra interface, through the QDBusConnection class, to a class callback function:

Declare a callback function in your Qt class header:

public slots:
 void configChanged( QString msg );

Add the QtDBus API to your sources:

#include <QtDBus/QtDBus>

Wire the org.libelektra interface to your callback, e.g. in your Qt class’s constructor:

if( QDBusConnection::sessionBus().connect( QString(), "/org/libelektra/configuration", "org.libelektra", QString(),
 this, SLOT( configChanged( QString ) )) )
 fprintf(stderr, "=================== Done connect\n" );

The org.libelektra signals then arrive in your callback:

void Synnefo::configChanged( QString msg )
{
 fprintf( stdout, "config changed: %s\n", msg.toLocal8Bit().data() );
}
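
To test the wiring without libelektra actually writing anything, you can emit a matching signal yourself from a second small program. This is only a sketch: the member name "commit" is an arbitrary stand-in, not necessarily what libelektra emits; the connect above passes an empty QString and thus matches any member of the interface:

#include <QCoreApplication>
#include <QtDBus/QtDBus>

int main( int argc, char ** argv )
{
  QCoreApplication app( argc, argv );
  // "commit" is a stand-in member name for testing
  QDBusMessage msg = QDBusMessage::createSignal(
      "/org/libelektra/configuration", "org.libelektra", "commit" );
  msg << QString( "user/test/key" );
  QDBusConnection::sessionBus().send( msg );
  app.processEvents(); // flush the queued D-Bus message
  return 0;
}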

As the number of messages is not always known, it is useful to treat the first message as a ping and to update only after a small timeout. Here is a more practical elaboration:

// init a gate keeper in the class constructor:
acceptDBusUpdate = true;

void Synnefo::configChanged( QString msg )
{
  // allow the first message to ping
  if(acceptDBusUpdate == false) return;
  // block more messages
  acceptDBusUpdate = false;

  // update the view slightly later and avoid trouble
  QTimer::singleShot(250, this, SLOT( update() ));
}

void Synnefo::update()
{
  // clear the Oyranos settings cache (Oyranos CMS specific)
  oyGetPersistentStrings( NULL );

  // the data model reading from libelektra and GUI update
  // code ...

  // open the door for more messages to come
  acceptDBusUpdate = true;
}
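
For completeness, here is a minimal sketch of how the pieces above could sit together in the class header, assuming Synnefo is a QWidget subclass:

#include <QWidget>

class Synnefo : public QWidget
{
  Q_OBJECT
public:
  Synnefo( QWidget * parent = 0 );
public slots:
  void configChanged( QString msg ); // receives org.libelektra signals
private slots:
  void update();                     // delayed view refresh
private:
  bool acceptDBusUpdate;             // gate keeper, set to true in the constructor
};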

The above code works for both Qt4 and Qt5.

Responsive HTML with CSS and JavaScript

In this article you can learn how to make a minimalist web page readable on differently sized devices, like larger desktop screens and handhelds. The ingredients are HTML, CSS and a little JavaScript. The goals for my home page are:

  • most of the layout resides in CSS in a stateless way
  • minimal JavaScript
  • on small displays – single column layout
  • on wide format displays – division of text in columns
  • count of columns adapts to browser window width or screen size
  • combine with markdown

CSS:

h1,h2,h3 {
  font-weight: bold;
  font-style: normal;
}

@media (min-width: 1000px) {
  .tiles {
    display: flex;
    justify-content: space-between;
    flex-wrap: wrap;
    align-items: flex-start;
    width: 100%;
  }
  .tile {
    flex: 0 1 49%;
  }
  .tile2 {
    flex: 1 280px;
  }
  h1,h2,h3 {
    font-weight: normal;
  }
}
@media (min-width: 1200px) {
  @supports ( display: flex ) {
    .tile {
      flex: 0 1 24%;
    }
  }
}

The content in class="tile" is shown in one and up to four columns. tile2 has a fixed width and picks its column count by itself. Below the media query widths, all flex boxes behave like one normal column. The @media (min-width: 1000px) rule assumes a bigger screen. Very likely there is an overlapping width range for bigger handhelds, tablets and smaller laptops. But the layout works reasonably, behaves well when the browser window is shrunk on a desktop or viewed fullscreen, and stays readable. Expressing all tile sizing through the flex: shorthand helps keep compatibility with layout engines without flex support, e.g. Dillo.

For reading on high-DPI monitors with physically small screens it is essential to set the font size properly. I found no way to do that in CSS so far.

JavaScript:

function make_responsive () {
  if( typeof screen != "undefined" ) {
    var fontSize = "1rem";
    if( screen.width < 400 ) {
      fontSize = "2rem";
    }
    else if( screen.width < 720 ) {
      fontSize = "1.5rem";
    }
    else if( screen.width < 1320 ) {
      fontSize = "1rem";
    }
    if( typeof document.children === "object" ) {
      var obj = document.children[0]; // html node
      obj.style["font-size"] = fontSize;
    } else if( typeof document.body != "undefined" ) {
      document.body.style.fontSize = fontSize;
    }
  }
}
document.addEventListener( "DOMContentLoaded", make_responsive, false );
window.addEventListener( "orientationchange", make_responsive, false );

The above JavaScript carefully checks for various browser attributes and scales the font size to compensate for small screens and make the text readable. It works in all tested browsers (Firefox, Chrome, Konqueror, IE) besides Dillo, and on all tested platforms (Linux/KDE, Android, WP8.1).

Below is the HTML, as it can be embedded in markdown, to illustrate the approach.

HTML:

<div class="tiles">
<div class="tile"> My first text goes here. </div>
<div class="tile"> Second text goes here. </div>
<div class="tile"> Third text goes here. </div>
<div class="tile"> Fourth text goes here. </div>
</div>

In my previous articles you can read about using CSS3 for Translation and Web Open Font Format (WOFF) for Web Documents.

CSS3 for Translation

Years ago I used a CMS to bring content to a web page. But with evolving CSS, markdown syntax and comfortable git hosting, publication of smaller sites can be handled without a CMS. My home page is translated, so I wanted to express page translations in a stateless language. The ingredients are simple. My requirements are:

  • stateless CSS, no JavaScript
  • integrable with markdown syntax (HTML tags are ok'ish)
  • default language shall remain visible, when no translation was found
  • hopefully searchable by robots (Those need to understand CSS.)

CSS:

/* hide translations initially */
.hide {
  display: none
}
/* show a browser detected translation */
:lang(de) { display: block; }
li:lang(de) { display: list-item; }
a:lang(de) { display: inline; }
em:lang(de) { display: inline; }
span:lang(de) { display: inline; }

/* hide default language, if a translation was found */
:lang(de) ~ [lang=en] {
 display: none;
}

The CSS switches the display property of the elements matched by the :lang() selector. However, the selectors for the different display: types are somewhat long; not as short as I would have liked.

Markdown:

<span lang="de" class="hide"> Hallo _Welt_. </span>
<span lang="en"> Hello _World_. </span>

Even so, the plain markdown text does not look as straightforward as before. But it is acceptable IMO.

Hiding the default language uses the sibling combinator E ~ F and selects an element carrying the lang="en" attribute. Matching elements are hidden (display: none;); here that is the default language string "Hello _World_." with the lang="en" attribute. This approach works fine in Firefox (49), Chrome (54), Konqueror (4.18, both KHTML and WebKit) and Internet Explorer on WP8.1. Dillo (3.0.5) does not show the translation, only the English text, which is the correct fallback for an engine without :lang() support.

During my search I found approaches for content swapping with CSS, like :lang()::before { content: xxx; }. But those were not well accessible. Comments and ideas are welcome.

Digital diaphragm for optical lenses

In photography most optical lenses use mechanical diaphragms for aperture control. They are traditionally manufactured from metal blades and work quite well. However metal blades have some disadvantages:

  • mechanical parts will sooner or later fail
  • the cheaper forms give strong diffraction spikes
  • manufacturers need more metal blades for a round iris, which is expensive
  • a metal blade with its sharp edges gives artefacts, which are visible in out of focus regions
  • on the plus side, contrast is very high thanks to the opaque metal

In order to obtain a better bokeh, some lenses are equipped with apodisation filters. Those filters mostly work at fully open aperture and are very specialised and thus relatively expensive.

A digital aperture, built as a transparent display with enough spatial resolution, can not only improve the shape of the diaphragm. It could act as an apodisation filter, if it supports enough gray levels. And it can change its form programmatically.
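
As a toy illustration of the gray level idea, the following sketch, with all numbers invented, computes a Gaussian apodisation mask for a small monochrome transmission display:

#include <math.h>  // exp, sqrt
#include <stdio.h> // printf

int main(void)
{
  const int n = 16;                  /* display cells per side */
  const double radius = n / 2.0;     /* aperture radius in cells */
  const double sigma = radius / 2.0; /* width of the Gaussian falloff */

  for(int y = 0; y < n; ++y)
  {
    for(int x = 0; x < n; ++x)
    {
      double dx = x - n / 2.0 + 0.5, dy = y - n / 2.0 + 0.5;
      double r = sqrt( dx * dx + dy * dy );
      /* transmission: full in the centre, Gaussian falloff, opaque outside */
      double t = r > radius ? 0.0 : exp( -r * r / (2.0 * sigma * sigma) );
      printf( "%d", (int)(t * 9.0 + 0.5) ); /* quantise to 10 gray levels */
    }
    printf( "\n" );
  }
  return 0;
}

A real implementation would map such a mask to the display's native gray levels and calibrate it against the panel's actual transmission curve.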

Two possible digital diaphragm forms:
[Figure: Kreise (circles), showing the two diaphragm forms]

  • leverage existing display technology
  • better aperture shape for reduced artefacts
  • apodisation filter on demand, for best bokeh, or switched off for more light
  • programmable or at least updatable aperture patterns (sharp/Gaussian/linear/…)
  • no metal blades or other mechanical parts to fail
  • should over the years become cheaper than the mechanical counterpart
  • reduces the number of glass to air surfaces in the optical lens design
  • the aperture can be integrated into lens groups
  • display transparency increases quickly and, for OLED, is at 45% as of 2016, which at the moment means the loss of about one f-stop
  • mobile demands high display resolutions anyway

The digital aperture can easily be manufactured as a monochrome display and placed, as is traditional, between two optical lens groups, where the diaphragm is located today. It is even possible to optically integrate the aperture into one lens group, without the additional glass to air surfaces that moving blades require. Once the optical quality of the filter display is good enough, a digital diaphragm can even become cheaper than a high quality mechanical counterpart.

<< relearned <<|

Bit shifting, done by the << and >> operators, allows C-family languages to express memory and storage access, which is quite important for reading and writing exchangeable data. But:

Question: where does the bit move when shifted left?

Answer: it depends

Long answer:

// On a LSB/Intel machine a left shift moves the bit
// the opposite way, to the right. Omg
// The shift operators (<<, >>) follow the MSB scheme,
// with the highest value on the left.
// Shift math expresses our written number order:
// 10 is more than 01
// x left shifted by n == x << n == x * pow(2,n)
#include <stdio.h> // printf
#include <stdint.h> // uint16_t

int main(int argc, char **argv)
{
  uint16_t u16, i, n;
  uint8_t * u8p = (uint8_t*) &u16; // uint16_t as 2 bytes
  // iterate over all bit positions
  for(n = 0; n < 16; ++n)
  {
    // left shift operation
    u16 = 0x01 << n;
    // show the mathematical result
    printf("0x01 << %u:\t%d\n", n, u16);
    // show the bit position
    for(i = 0; i < 16; ++i) printf( "%u", u16 >> i & 0x01);
    // show the bit location in the actual byte
    for(i = 0; i < 2; ++i)
      if(u8p[i]) 
        printf(" byte[%d]", i); 
    printf("\n");
  }
  return 0;
}

Result on a LSB/intel machine:

0x01 << 0:      1 
1000000000000000 byte[0] 
0x01 << 1:      2 
0100000000000000 byte[0] 
0x01 << 2:      4 
0010000000000000 byte[0] 
0x01 << 3:      8 
0001000000000000 byte[0]
...

On MSB machines << moves bits to the left, while on LSB machines << is a lie and moves them to the right. For directional shifts I would like to use separate operators, e.g. <<| and |>>.

High DPI with FLTK

After switching to a notebook with a higher resolution monitor, I noticed that the FLTK based ICC Examin application looked way too small. Having worked much with pixel independent resolutions in QML during the last months, it was a pain to see the non adapted FLTK GUI. I had the impression that, despite several years of very appreciated advancement in monitor technology, some parts of the graphics stacks did not move and take advantage. So I became curious about how to solve high DPI support the hard way.

First of all a bit of introduction to my environment, which is openSUSE Linux and KDE-5 with KF5 5.5.3. Xorg often uses a hardcoded default of 96 DPI, which is very unfortunate; or is it just a bug? Anyway, KDE follows X11, so the desktop on the high resolution monitor initially looks as bad as any application. All windows, icons and text are way too small to be usable. In KDE’s system settings I had to force the fonts DPI and double it from 96 to 192. In the kscreen module I had to set a scale of 2.0 and then increase the width of KDE’s task bar. Out of the box usability is bad with so many inconsistent manual user interventions. In comparison, in the likewise tested Unity DE I had to set a single display scaling factor of 2.0, and everything worked fine and instantly: icons, fonts and window sizes. It would be cool if DEs and Xorg understood screen resolution. In the tested OS X 10.10, even different resolutions of multiple monitors are scaled correctly, so moving a window from a high DPI monitor to a traditional low resolution external monitor gives reasonable physical GUI rendering. Apple’s OS X provides that good behaviour out of the box, without manual user intervention. It would be interesting how GNOME behaves with regard to display scaling.

Back to FLTK. As FLTK appears to define itself as pixel based, DPI detection or settings have no effect in FLTK. As an app developer I want to improve the user experience, so I first modified ICC Examin to initially render at a physically reasonable size. I looked at the FL::screen_dpi() function. It is only a helper for detecting DPI values, and in FLTK-1.3.3 it returns hardcoded values of 96 DPI under Linux. I noticed that XRandR provides correct millimetre based screen dimensions; together with the screen resolution XRandR reports, it is easy to calculate the DPI values. ICC Examin rendered much better with the XRandR based DPIs instead of FLTK’s 96 DPI. But ICC Examin looked slightly too big: the 192 DPI set in KDE are lower than the 227 DPI XRandR detects for my notebook’s monitor. KDE provides its forced DPI setting to applications by setting Xft.dpi in XResources; that way all Xft based applications should have the same basic font scaling, and KDE and Mozilla apps do use Xft. So adding parsing of the Xlib XResources solved that for ICC Examin. The remaining programming was to programmatically scale FLTK’s default font size up from 14 pixels with: FL_NORMAL_SIZE = scale(14). Some more widget sizes, the FTGL font sizes for OpenGL, drawing line widths and graphics scaling were needed. After all those changes, ICC Examin now takes advantage of high resolution rendering inside KDE. Testing under Windows and OS X must follow.
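
The following is a compilable sketch of that approach, not the actual ICC Examin code: it derives a DPI value from the X server, prefers a user forced Xft.dpi from XResources, and scales FLTK’s default font size. ICC Examin additionally queries XRandR for more reliable millimetre sizes; here plain Xlib stands in for brevity:

#include <FL/Fl.H>          // FL_NORMAL_SIZE
#include <X11/Xlib.h>
#include <X11/Xresource.h>  // XrmGetResource for Xft.dpi
#include <stdlib.h>         // atof

static double screen_dpi(void)
{
  double dpi = 96.0; /* the common fallback */
  Display * dpy = XOpenDisplay(NULL);
  if(!dpy) return dpi;

  /* derive DPI from the reported millimetre screen size */
  int scr = DefaultScreen(dpy);
  if(DisplayWidthMM(dpy, scr) > 0)
    dpi = DisplayWidth(dpy, scr) * 25.4 / DisplayWidthMM(dpy, scr);

  /* prefer a user forced Xft.dpi from XResources, as set by KDE */
  XrmInitialize();
  char * resources = XResourceManagerString(dpy);
  if(resources)
  {
    XrmDatabase db = XrmGetStringDatabase(resources);
    char * type = NULL;
    XrmValue value;
    if(db && XrmGetResource(db, "Xft.dpi", "Xft.Dpi", &type, &value) && value.addr)
      dpi = atof(value.addr);
    if(db) XrmDestroyDatabase(db);
  }

  XCloseDisplay(dpy);
  return dpi;
}

/* scale a pixel value from the 96 DPI design assumption */
static int scale(int pixels)
{
  return (int)(pixels * screen_dpi() / 96.0 + 0.5);
}

int main(int argc, char ** argv)
{
  FL_NORMAL_SIZE = scale(14); /* set before any widget is created */
  /* ... create the windows and call Fl::run() as usual ... */
  return 0;
}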

The way to program high DPI support into a FLTK application was basically the same as in QML. However, Qt’s QML takes over more tasks by providing a relative font unit, much like CSS em sizes. For FLTK I would like to see some relative unit based APIs in addition to the pixel based ones. That would help to write more elegant code and to integrate with FLTK’s GUI layout program fluid. Computing moves more and more toward W3C technology; it would help FLTK to follow.

Web Open Font Format (WOFF) for Web Documents

The Web Open Font Format (short WOFF; here using the Aladin font) is several years old. Still it took some time to get to a point where WOFF is almost painless to use on the Linux desktop. WOFF is based on OpenType style fonts and is in some ways similar to the better known TrueType Font (.ttf). TTF fonts are widely known and used on the Windows platform. This feature rich kind of font is used for high quality font display in the system and in local office and design documents. WOFF aims at closing the gap by making those features available on the web. With these fonts it becomes possible to show nice looking type on paper and in web presentations in almost the same way. In order to make WOFF a success, several open source projects joined forces, among them Pango and Qt, and contributed to HarfBuzz, an OpenType text shaping engine. Firefox and other web engines can handle WOFF inside SVG web graphics and HTML web documents using HarfBuzz. Inkscape, at least since version 0.91.1, uses HarfBuzz too for text inside SVG web graphics. As Inkscape is able to produce PDFs, designing for both the web and the print world at the same time becomes easier on Linux.

Where to find and get WOFF fonts?
Open Font Library and Google host huge font collections, and there are more out on the web.

How to install WOFF?
For use inside Inkscape one needs to install the fonts locally. Just copy the fonts to your personal ~/.fonts/ path and run

fc-cache -f -v

After that procedure the fonts are visible inside a newly started Inkscape.

How to deploy SVG and WOFF on the Web?
Thankfully using WOFF in SVG documents is similar to HTML documents. However, simply uploading an Inkscape SVG to the web as is will not be enough to show WOFF fonts. While viewing the document locally is fine, Firefox and friends need to find those fonts independently of the locally installed ones. Right now you need to manually edit your Inkscape SVG to point to the online location of your fonts. For that, open the SVG file in a text editor and place a CSS font-face reference right after the <svg> element like:

<svg …>
<style type="text/css">
@font-face {
  font-family: "Aladin";
  src: url("fonts/Aladin-Regular.woff") format("woff");
}
</style>

How to print an Inkscape SVG document containing WOFF?
Just convert to PDF from Inkscape’s file menu. Inkscape takes care of embedding the needed fonts and creates a portable PDF.

In case your preferred software is not yet WOFF ready, try the woff2otf Python script for converting to the old TTF format.

Hope this small post gets some of you on the font fun path.

Portable Float Map with 16-bit Half

Recently we saw some lively discussions on the OpenEXR mailing list about support for Half inside the TIFF image format. That made me aware of the corresponding oyHALF code paths inside Oyranos. For easy testing, Oyranos uses the KISS format PPM, which comes with a three line ASCII header followed by the uncompressed pixel data. I wanted to create some RGB images containing 16-bit floating point half channels, but that PFM format variant is not yet defined. So here comes an RFC.

A portable float map (PFM) starts with the first line identifier "Pf" or "PF" and contains 32-bit IEEE floating point data. The 16-bit IEEE/Nvidia/OpenEXR floating point data variant starts with a first line magic of "Ph" or "PH", similar to PFM. "Ph" stands for grayscale with one sample; the "PH" identifier is used for RGB with three samples.
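
A minimal writer for the proposed RGB variant could look like the following sketch. The file name is illustrative, the pixel data is dummy, and I assume "PH" keeps the remaining PFM header layout of a dimensions line plus a scale line whose sign encodes the byte order:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
  const int width = 4, height = 2;
  /* three half samples per RGB pixel; a real writer would convert
     float values to IEEE 754 half precision here */
  uint16_t half_pixels[4 * 2 * 3] = { 0 };

  FILE * fp = fopen( "image.pfm", "wb" ); /* name/extension illustrative */
  if(!fp) return 1;

  /* "PH" = RGB with three samples; negative scale = little endian, as in PFM */
  fprintf( fp, "PH\n%d %d\n-1.0\n", width, height );
  fwrite( half_pixels, sizeof(uint16_t), (size_t)width * height * 3, fp );

  fclose( fp );
  return 0;
}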

That’s it. Oyranos supports the format in git and maybe in the next 0.9.6 release.

Reanimation of MacBook Air

For some months our MacBook Air was broken. Finally a good time to replace it, I thought. On the other hand, the old notebook was quite useful even 6 years after purchase. Coding on the road, web surfing, SVG/PDF presentations and so on worked fine on the Core2Duo device from 2008. The first symptoms of breakage were video errors on a DVI connected WUXGA/HDTV+ sized display. The error looked like unstable frequency handling, with the upper scan lines visually OK and the lower end wobbling to the right. A black desktop background with a small window was sometimes a workaround. This notebook type uses a Nvidia 9400M on the logic board. Another, non portable computer of mine with Nvidia 9300 Go on board graphics runs without such issues, so I saw no reason to worry about the type of graphics chip. Later on the notebook stopped completely, even without an attached external display. It emitted the well known one beep every 5 seconds during startup. On MacBook Pros and Airs this symptom usually means broken RAM.

The RAM is soldered directly onto the logic board, and a replacement at Apple appeared prohibitively expensive. While I began to look around to sell the broken hardware to hobbyists, I found an article about these early MacBook Airs. This specific one is a 2.1 rev A 2.13 GHz. It was mentioned that early devices suffered from lead-free soldering, which performs somewhat worse with regard to ductility than traditional soldering. The result was that many of these devices suffered electrical disconnections in their circuitry through repeated warming and cooling and the related thermal expansion and contraction. The device showed the one beep symptom on startup without booting. An engineer from Apple was unofficially cited with the suggestion that putting the logic board in an oven at around 100° Celsius for a few minutes might suffice to solve the issue. That sounded worth a try to me. As I love to open up devices to look inside and eventually repair them, taking my time to dismount the logic board instead of bringing it to a repair service was fine for me. But be warned, doing so can be difficult for beginners. I placed the board on some wool in the oven at 120 °C, and after 10 minutes, plus some more time for reassembly, the laptop started to work again. I am not sure if the soldering is really fixed now or if the symptoms will come back. I guess that some memory chips on the board were reset and stopped reporting broken RAM. So my device works again and will keep us happy for a while, I hope.

Graphical profiling under Linux

The Oyranos library became quite a bit slower during the last development cycle for 0.9.6. That is pretty normal, as new features were added and more ideas waited for implementation, leaving not as much room for details as desired. During the last two weeks I took a break, mainly searched for bottlenecks inside the code base and wanted to bring performance back to satisfactory levels. One good starting point for optimisations in Oyranos are the speed tests inside the test suite, but they only help at a few starting points. What I wished for was an easy way to see where code paths spend lots of time and perhaps which line inside a source file takes much computation time.

I knew the oprofile suite from the old days. So I installed it on my openSUSE machine, but had not much success getting call graphs to work. A web search for “Linux profiling” brought me to an article on pixel beat and to perf. I found the article very informative and do not want to duplicate it here. The perf tools are impressive. The sample recording needs to run as root, but the obtained sample information is quite useful. Most perf tools are text based, so getting to the hot spots is not straightforward for my taste. However, the pixel beat site names a few graphical data representations and has a screenshot of kcachegrind. The last link under misc leads to flame graphs. The flame graphs are amazing representations of what happens inside Oyranos performance wise. They show in a very intuitive way which code paths take the most time. The graphs are zoomable SVGs.

Here is an example for oyranos-profiles, with the expensive hash computation and without:

Computation time was much reduced. Another bottleneck was expensive DB access. I had talked with Markus about that some time ago already, but forgot to implement it. The corresponding flame graph reminded me of that open issue. After some optimisation the DB bottleneck is much reduced as well.

The command to create the data is:

root$ perf record -g my-command

user$ perf-flame-graph.sh my-command-graph-title

… with perf-flame-graph.sh somewhere in your path:

#!/bin/sh
path=/path/to/FlameGraph
output="$1"
if [ "$output" = "" ]; then
  output="perf"
fi

perf script | $path/stackcollapse-perf.pl > $TMPDIR/$USER-out.perf-folded
$path/flamegraph.pl $TMPDIR/$USER-out.perf-folded > $TMPDIR/$USER-$output.svg
firefox $TMPDIR/$USER-$output.svg

One needs perf and FlameGraph, a set of Perl scripts, installed. The above script is just a typing abbreviation.