The Document Foundation Planet


March 26, 2015

Caolán McNamara

gtk3 vclplug, some more gesture support

Now gtk3 long-press support to go with swipe

With this, a long-press in presentation mode will bring up the context menu for switching between using the pointer to draw on the slide and normal slide navigation.

by Caolán McNamara at March 26, 2015 02:53 PM

Michael Meeks

2015-03-26 Thursday

  • Mihai posted a nice blog with a small video of LibreOffice Online in action - hopefully we'll have a higher-resolution version that doesn't feature some bearded idiot next time.
  • Out to the Dentist for some drilling action.

March 26, 2015 11:00 AM

User Prompt

LibreOffice Design Session: Special Character

The LibreOffice UX team discussed possible improvements to the dialog for inserting special characters, in particular the feature of recently used items. But today we have more than one solution, since the current dialog is technology-driven instead of user-centric.

Topic: Easing access to recently used special characters

Bug Tickets/Feature Requests:

Bug 34882: “Special character favorites”


  • There is no way to quickly re-use recently-picked special characters, forcing the user to search in the whole character map, which has no filter to narrow down results.
  • People writing scientific/legal essays or reports frequently need to insert accented letters and other characters.
  • Technical point of view on special characters (a 15-by-15 grid alignment) instead of a natural organization
  • The subset is limited by presetting a particular font (some characters cannot be found when the wrong font is set up)
  • A search function is missing (it should cover name, ID, symbol…)
  • Weird interaction: selection first, followed by copy/paste from the Characters field
  • No individualization, such as storing the last subset or defining ‘my own subset’

Screenshots of current UI


Figure 1: Current dialog to add special characters.

Features/Functional Requirements

Basic solution

  • Provide quick access to recently used special characters
  • The list of recently used items should be easily accessible (limit the number of items)
  • Sort by last in, first out; items selected from the list of recently used characters are moved to the front
  • Access to recently used items from the toolbar
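The last-in-first-out behaviour described above (bounded list, move-to-front on reuse) can be sketched as follows. This is an illustrative Python sketch, not actual LibreOffice code; the class name and default limit are assumptions:

```python
class RecentCharacters:
    """Keep a bounded, most-recent-first list of inserted special
    characters: a newly used character moves to the front, and the
    oldest entry drops off when the limit is exceeded."""
    def __init__(self, limit=16):
        self.limit = limit
        self.items = []

    def use(self, char):
        if char in self.items:
            self.items.remove(char)   # reused item moves to the front
        self.items.insert(0, char)    # last in, first out
        del self.items[self.limit:]   # enforce the limit
```

The same list could then back both the dialog grid and the toolbar split button.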

Extended solution

  • Natural, user-centric organization
  • Advanced search (fuzzy Unicode name matching)
  • Draw the symbol to find its representation (Google-like)
  • Users should be able to “pin” their favorite characters so that they remain in a fixed position within the list.
  • Associate a shortcut with a symbol

Constraints for the design

  • Two options:
    • easy hack to realize required features
    • advanced solution for maximal improvement of UX

New design/Mockup

Basic solution


Figure 2: Proposal for a simple solution.

  • Add a grid with recently used items
  • The last used (new) character is added as the leftmost item
  • Search by entering the Unicode hex or decimal code (also show the value of the selected item there)
  • Have a preview along with the Unicode name
  • Remove the ‘characters’ field and add the current item directly to the document on double click
  • Provide access to recently used items from the toolbar
  • Consider having a predefined list of recent items to prefill the split button
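The hex/decimal code search could behave roughly like the sketch below, here in Python purely as illustration; the accepted input forms (`U+20AC`, bare hex, or plain decimal) are assumptions about how the field might work, not a specification:

```python
import unicodedata

def lookup_char(code_text):
    """Resolve a user-typed Unicode code point, either hex ("U+20AC",
    "20AC") or decimal ("8364"), to the character and its Unicode name
    for the preview."""
    text = code_text.strip().upper()
    if text.startswith("U+"):
        codepoint = int(text[2:], 16)
    elif all(c in "0123456789" for c in text):
        codepoint = int(text, 10)   # all digits: treat as decimal
    else:
        codepoint = int(text, 16)   # contains hex letters
    char = chr(codepoint)
    return char, unicodedata.name(char, "<unnamed>")
```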

Extended solution

The extended solution is inspired by Google Docs, with only a few improvements. Check it out to see the awesome OCR-like function in action.


Figure 3: Extended solution: Basic layout.

  • Natural organization of Unicode characters, independent of the current font
  • Double click to insert selection into document

Figure 4: Extended solution: Recently used items.

  • Add the item to the (long list of) recently used characters on double click (aka insert)
  • Access recently used characters as a special category
  • Access to the most frequently used via toolbar as shown in the simple solution

Figure 5: Extended solution: Search-by-name feature, detailed information.

  • Provide search by name, label, code with fuzziness
  • Show detailed information in tooltips
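Fuzzy search over Unicode names could be approximated as below. This is a Python illustration of the idea only (the scoring via `difflib` and the code-point range are assumptions, not the proposed implementation):

```python
import difflib
import unicodedata

def fuzzy_search(query, codepoints=range(0x20, 0x2700), limit=5):
    """Return up to `limit` characters whose Unicode names best match
    the query, using difflib's similarity ratio as a simple fuzziness
    measure."""
    scored = []
    for cp in codepoints:
        name = unicodedata.name(chr(cp), "")
        if not name:
            continue  # skip unassigned code points
        score = difflib.SequenceMatcher(None, query.upper(), name).ratio()
        scored.append((score, chr(cp), name))
    scored.sort(reverse=True)
    return [(char, name) for _, char, name in scored[:limit]]
```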

Figure 6: Extended solution: Graphical search.

  • Provide the awesome OCR-like search feature


While the first option was designed with a good balance between effort and benefit in mind, the second solution would be really awesome. Of course, it is a challenge for developers and needs further refinement with respect to the workflow, so please take this as a first idea.

As always we are interested in your comments. What do you think?

by Heiko Tietze at March 26, 2015 10:19 AM

Caolán McNamara

gtk3 vclplug, basic gesture support

gtk3's gesture support is the functionality I'm actually interested in, so now that presentations work in full-screen mode, I've added basic GtkGestureSwipe support to LibreOffice (for gtk3 >= 3.14) and hooked it up to the slideshow, so now swiping to the left advances to the next slide, and swiping to the right goes to the previous slide.
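The mapping from a swipe to slide navigation amounts to checking the sign of the horizontal velocity that GtkGestureSwipe reports at the end of a gesture. A rough Python sketch of the decision logic (the threshold value and function name are assumptions, not the actual vclplug code):

```python
SWIPE_THRESHOLD = 50.0  # pixels/second; an assumed tuning value

def on_swipe(velocity_x, velocity_y):
    """Map a horizontal swipe to slideshow navigation: a left swipe
    (negative x velocity) advances, a right swipe goes back."""
    if abs(velocity_x) <= abs(velocity_y) or abs(velocity_x) < SWIPE_THRESHOLD:
        return None  # mostly vertical, or too slow: ignore
    return "next" if velocity_x < 0 else "previous"
```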

by Caolán McNamara at March 26, 2015 09:35 AM

March 25, 2015

Mihai Varga

LibreOffice Online

LibreOffice Online will come as a collaboration between IceWarp and Collabora, alongside hundreds of other devoted LibreOffice contributors. The product will be an awesome cloud-based solution that will enhance collaboration across different platforms and document formats, as LibreOffice Online will be the first to offer native, full-fidelity support for the Open Document Format (ODF).
This new product will surely be a top alternative to other online solutions such as Google Docs or Office 365.
In order to go online, we are going to make use of the Leaflet JavaScript library for tile rendering. Development is still in progress, but you can check out a short demo of smooth scrolling through a document.

by Mihai Varga at March 25, 2015 09:50 PM

Charles Schulz

Introducing a new fork

Today, on the 25th of March 2015, my wife and I have the pleasure of introducing the most beautiful and amazing fork to date. And when it comes to forks, you can trust me.

Introducing Vallerand James Gerhard Schulz, our newborn son. May you possess strength, wisdom and beauty.




As a result, expect blog posts and updates to be slightly delayed or chaotic. There are more important things in life, after all. Take care everyone.

by Charles at March 25, 2015 09:14 PM

Michael Meeks

LibreOffice On-Line & IceWarp

Today we announced a collaboration between IceWarp and Collabora to start the creation of LibreOffice On-Line, a scalable, cloud-hostable, full featured version of LibreOffice. My hope is that this has a huge and positive impact for the Free Software community, the business ecosystem, personal privacy, and more. Indeed, this is really one of the last big missing pieces that needs solving (alongside the Android version which is well underway). But wait - this post is supposed to be technical; let's get back to the code.

A prototype - with promise

At the beginning of the LibreOffice project, I created (for our first Paris Conference) a prototype of LibreOffice On-Line using Alexander Larsson's (awesome) GTK+ Broadway - you can still see videos of that around the place. Great as the Broadway approach is (it provides essentially a simple Virtual Desktop model into your browser), the prototype taught us several important things which we plan to get right in LibreOffice On-Line:

  • Performance - the Broadway model has the advantage of presenting the full application UI, however every time we want to do anything in the document - such as selecting, panning, or even blinking the cursor; we had to send new image fragments from the server: not ideal.
  • Memory consumption / Scalability - another side effect of this is that, no matter how unresponsive the user is (how many tabs are you long-term-not-looking-at in your browser right now?), it was necessary to have a full LibreOffice process running to stay responsive and store the document. That memory consumption naturally limits the ability to handle many concurrent clients.
  • Scripting / web-like UI - it would have been possible to extend the gtk javascript to allow tunnelling bespoke commands through to LibreOffice to allow the wrapping of custom UI, but still the work to provide user interface that is expected on the web would be significant.

Having said all this, Broadway was a great basis to prove the feasibility of the concept - and we re-use the underlying concepts; in particular the use of web sockets to provide the low-latency interactions we need. Broadway also worked surprisingly well from eg. a nearby Amazon cloud datacentre. Similarly having full-fidelity rendering - is a very attractive proposition, independent of the fonts, or setup of the client.

An improved approach

Caching document views

One of the key realisations behind LibreOffice On-Line is that much of document editing is not the modification itself; a rather large proportion of time is spent reading, reviewing, and browsing documents. Thus, by exposing the workings of document rendering as pixel squares (tiles) via LibreOfficeKit, we can cache large chunks of the document content both on the server and in the client's browser. As users read through a document, or re-visit it, there is no need to communicate at all with the server, or even (after an initial rendering run) to have a LibreOfficeKit instance around there either.

Thus in this mode, the ability of the browser's Javascript to understand things about the document itself allows us to move much more of the pan/zoom reading goodness into your client. That means that after an initial (pre-)fetch, responsiveness can be determined more by your local hardware and its ability to pre-cache than by remote server capacity. Interestingly, this same tiled-rendering approach is used by Fennec (Firefox for Android) and LibreOffice for Android to get smooth mobile-device scrolling and rendering, so LibreOfficeKit is already well adapted for this use-case.
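Conceptually, the client-side tile store is just a bounded cache keyed by tile position and zoom level. A minimal LRU sketch in Python, purely as illustration (the key shape and eviction policy here are assumptions, not the LOOL design):

```python
from collections import OrderedDict

class TileCache:
    """A tiny least-recently-used cache for rendered tiles,
    keyed by (tile_x, tile_y, zoom)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._tiles = OrderedDict()

    def get(self, key):
        if key not in self._tiles:
            return None             # miss: the client must fetch it
        self._tiles.move_to_end(key)  # mark as recently used
        return self._tiles[key]

    def put(self, key, pixels):
        self._tiles[key] = pixels
        self._tiles.move_to_end(key)
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)  # evict least recently used
```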

Browser showing hidden tile cache ready to be revealed when panning
Editing live documents

In recent times, The Document Foundation has funded, via the generosity of TDF's donors, a chunk of infrastructure work to make it possible to use LibreOfficeKit to create custom document editors. There are several notable pieces of this work that intersect with this; I provide some links to the equivalent work being done for Android from Miklos Vajna:

Cursors & selection

Clearly blinking a cursor is something we can do trivially in the javascript client, rather than on the server; there are however several other interactions that benefit from browser acceleration. Text selection is a big piece of this - re-rendering text on the server simply in order to draw transparent selection rectangles over it makes very little sense - so instead we provide a list of rectangles to render in the browser. Similarly, drawing selection handles and interacting with images is something that can be handled pleasantly in the browser as well.
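The idea of shipping selection rectangles instead of re-rendered pixels can be illustrated by merging per-character bounding boxes into one rectangle per text line. A Python sketch, under the simplifying (and assumed) model that characters on the same line share a y coordinate:

```python
def selection_rectangles(char_boxes):
    """Merge per-character bounding boxes (x, y, w, h) into one
    rectangle per text line, keyed by the line's y coordinate;
    the browser then draws these as transparent overlays."""
    lines = {}
    for x, y, w, h in char_boxes:
        if y not in lines:
            lines[y] = [x, y, w, h]
        else:
            rect = lines[y]
            right = max(rect[0] + rect[2], x + w)
            rect[0] = min(rect[0], x)   # extend left edge
            rect[2] = right - rect[0]   # extend width
            rect[3] = max(rect[3], h)   # tallest glyph wins
    return [tuple(r) for r in sorted(lines.values(), key=lambda r: r[1])]
```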

Keyboard / touch input

Clearly it is necessary to intercept browser keystrokes, gestures and so on, transport these over the websocket and emit them into the LibreOfficeKit core.

Tile invalidation / re-rendering

Clearly when the document changes, it is necessary to re-render and provide new tile data to the client; naturally there is an existing API for this that was put in place right at the start of the Android editing work.
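Mapping an invalidated document rectangle onto the set of affected tiles is simple interval arithmetic. An illustrative Python sketch (square tiles and integer document units are assumptions for the example):

```python
def tiles_to_invalidate(rect, tile_size):
    """Given an invalidated document rectangle (x, y, w, h) and a
    square tile size, return the (column, row) indices of every tile
    that intersects the rectangle and so needs re-rendering."""
    x, y, w, h = rect
    first_col, last_col = x // tile_size, (x + w - 1) // tile_size
    first_row, last_row = y // tile_size, (y + h - 1) // tile_size
    return [(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]
```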

Command invocation

Another piece that is required, is transporting UNO commands, and state (such as 'make it bold', or 'delete it') from the client javascript through into the LibreOfficeKit core. This is a matter again of proxying the required functionality via Javascript. The plan is to make it easy to create custom, bespoke UIs with a bit of CSS / Javascript magic wrapped around and interacting with the remote LibreOfficeKit core.
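Proxying a UNO command might look like a small JSON envelope travelling over the websocket. The envelope shape below is an illustrative assumption in Python, not the actual LOOL wire protocol:

```python
import json

def encode_command(command, args=None):
    """Client side: wrap a UNO command (e.g. ".uno:Bold") in a small
    JSON envelope for sending over the websocket."""
    return json.dumps({"type": "uno", "command": command,
                       "args": args or {}})

def decode_command(message):
    """Server side: unpack the envelope before forwarding the command
    into the LibreOfficeKit core."""
    payload = json.loads(message)
    assert payload["type"] == "uno"
    return payload["command"], payload["args"]
```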

Serializing selections

Clearly as & when we decide that a user has wandered off, we can save their intermediate document, serialize the cursor location & selection - free up the resources for some other editing process. As/when they return we can then restore that with some small document load delay, as we transparently back their cached view with a live editable LibreOfficeKit instance.

What does that look like roughly ?

Of course, lots of pieces are still moving and subject to change; however here is a perhaps helpful drawing. Naturally integrating with existing storage, orchestration, and security frameworks will be important over time, contributions welcome for your pet framework:

Initial architecture sketch

The case for simple collaboration

A final, rather important part of LibreOffice On-Line; which I've left to last is that of collaborative editing.

The problem of generic, asynchronous, multi-instance / multi-device collaborative document editing is essentially horrendous. Solving even the easy problems (ie. re-ordering non-conflicting edits) is non-trivial for any large set of potentially intersecting operations. However, for this case, there are two very significant simplifying factors.

First, there is a single, central instance of LibreOfficeKit rendering and providing document tiles to all clients. This significantly reduces the need to re-order an asynchronous stream of change operations; it also means that editing conflicts are seen as they are created.

Secondly, there is a controlled, and reasonably tractable set of extremely high-level operations based on abstract document co-ordinates - initially text selection, editing, deletion, object & shape movement, sizing, etc. which can be incrementally grown over time to extend to the core set of editing functionality.

These two simplifications, combined with managing and opportunistically strobing between users' cursor & selection contexts should allow us to provide the core of the document editing functionality.
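The first simplification (a single central instance applying edits in arrival order) can be sketched as a trivial sequencer; this is an illustration of the idea, not the real implementation:

```python
import itertools

class CentralEditor:
    """Every client edit is applied in arrival order by one central
    authority, which assigns a global sequence number instead of
    re-ordering concurrent operation streams."""
    def __init__(self):
        self._seq = itertools.count(1)
        self.log = []

    def apply(self, client_id, operation):
        seq = next(self._seq)
        self.log.append((seq, client_id, operation))
        return seq  # broadcast to all clients with this sequence number
```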

Show me the code

The code is available as of now in gerrit's online repository. Clearly it is the Alpha, not the Omega; the beginning, and not even the end of the beginning - which is a great time to get involved.


LibreOffice On-Line is just beginning; there is a lot that remains to be done, and we appreciate help with that as we execute over the next year for IceWarp. A few words about IceWarp - having spent a rather significant amount of time pitching this work to people, and having listened to many requests for it, it is fantastic to be working with a company that can marry that great strategic sense with the resources and execution to actually start something potentially market-changing here; go IceWarp!

March 25, 2015 09:00 PM

2015-03-25 Wednesday

  • Happy Document Freedom Day - great to see Collabora partner: write some helpful thoughts about it. Of course we have a nice banner / wrap - and a custom LibreOffice theme that looks like this for the event:
    LibreOffice with Document Freedom Day theme

March 25, 2015 09:00 PM

Cor Nouws

your most beautiful work with LibreOffice Writer

What brings more joy than publishing a guide on Document Freedom Day to help pupils create beautiful work with a free, open document standard? So today the Dutch-language LibreOffice community made available the publication "Maak je mooiste werkstuk met LibreOffice Writer" ("Create your most beautiful work with LibreOffice Writer"). The guide is for pupils aged 11 and older.
Currently the publication is Dutch-only, but it will be available in other languages soon, thanks to the ODF Authors community.
There will also be a version for pupils aged 9-11.
Download here.
And read the full announcement here.

by Cor & OfficeBuzz at March 25, 2015 04:23 PM

Collabora Community

LibreOffice Online questions answered: what, who, how, and when


  1. Complete fidelity between LibreOffice desktop and LibreOffice Online
  2. All file types supported by Writer, Calc, and Impress will be supported
  3. Initially will include a basic HTML5 user interface
  4. Open development process from start to finish
  5. Expected by the end of the year

Questions and answers: all the details

What will the new application be called?
Provisionally it will be called: “LibreOffice Online” (LOOL)
Will it be hosted by The Document Foundation?
Yes: It will be hosted by The Document Foundation, and contributed to the LibreOffice project in the normal way, as was done for the Smoose / Collabora LibreOffice Viewer for Android, in accordance with Collabora’s open-first development policy.
Who will maintain LOOL after launch?
Collabora will maintain it alongside the LibreOffice community, and all are welcome to contribute to development.
How will document support compare to LibreOffice?
LOOL will include complete document fidelity with LibreOffice desktop versions. All file types supported by Writer, Calc, and Impress will also be supported by LOOL, including OOXML and tens of other formats. No online office suite has achieved complete document fidelity across versions and devices; LOOL would be the first. Fidelity is achieved by using the same rendering engine as LibreOffice desktop (via LibreOfficeKit).
How will features compare to LibreOffice?
Editing features will initially be similar to LibreOffice Editor for Android. They will provide a subset of the features available in LibreOffice desktop versions.
What will be released at launch?
A new standalone LOOL server application, capable of serving a basic HTML5 web UI for viewing and editing documents.
When will LOOL be publicly released?
An initial release is expected by the beginning of 2016. Collabora has an open-first philosophy: all development work will be done in public, and it can be followed and contributed to as it develops.
What is the current status of LOOL development?
Work has already started and the results of this initial work will be shared shortly following this announcement.
When will the first public demos be available?
Video demonstrations are expected to coincide with the announcement on the 25th or shortly after.
What components will comprise the LOOL server?
1. LibreOfficeKit – an existing toolkit used by LibreOffice for Android and other LibreOffice projects, which houses the core document tiled rendering, layout, and calculating functionality of existing LibreOffice desktop applications.
2. An all-new tile server which communicates tiled images of documents to the browser, and manages the lifecycle of LibreOffice worker processes and cached image tiles.
What platforms will LOOL server support?
GNU/Linux will be supported at launch.
What languages will these components use?
Both LibreOfficeKit and the new tile server are written in C++.
What components will comprise the web client?
The web client will re-use and build upon the Leaflet JavaScript library for tile management and display, and shall be extended to show cursors and in-document selections. Other aspects of the user interface will be built upon Collabora’s existing Android UI work.
Will the web client require any addons or plugins?
No, the web client will use only JavaScript and HTML5.
What platforms will the web client target?
The web client should run on any device with a modern standards-compliant web browser.
What software license will LOOL use?
We anticipate uniform MPLv2 licensing for entirely new code, in line with The Document Foundation’s licensing model, although we are re-using and building on the Leaflet library, which is BSD-licensed.

Cloud image Copyright Kate Haskell, Creative Commons BY licensed.

by Sam Tuke at March 25, 2015 01:59 PM

Official TDF Blog

LibreOffice to become the cornerstone of the world’s first universal productivity solution

Berlin, March 25, 2015 – LibreOffice, the best free office suite ever, is set to become the cornerstone of the world’s first global personal productivity solution – LibreOffice Online – following an announcement by IceWarp and Collabora of a joint development effort. LibreOffice is available as a native application for every desktop OS, and is currently under development for Android. In addition, it is available on virtual platforms for Chrome OS, Firefox OS and iOS.

“LibreOffice was born with the objective of leveraging the OpenOffice historic heritage to build a solid ecosystem capable of attracting those investments which are key for the further development of free software,” says Eliane Domingos de Sousa, Director of The Document Foundation. “Thanks to the increasing number of companies which are investing on the development of LibreOffice, we are on track to make it available on every platform, including the cloud. We are grateful to IceWarp for providing the resources for a further development of LibreOffice Online.”

Development of LibreOffice Online started back in 2011, with the availability of a proof of concept of the client front end, based on HTML5 technology. That proof of concept will be developed into a state of the art cloud application, which will become the free alternative to proprietary solutions such as Google Docs and Office 365, and the first to natively support the Open Document Format (ODF) standard.

“It is wonderful to marry IceWarp’s vision and investment with our passion and skills for LibreOffice development. It is always satisfying to work on something that, as a company, we have a need for ourselves,” says Michael Meeks, Vice President of Collabora Productivity, who developed the proof of concept back in 2011 and will oversee the development of LibreOffice Online.

The availability of LibreOffice Online will be communicated at a later stage.

by italovignoli at March 25, 2015 01:12 PM

Collabora Community

IceWarp and Collabora Are Working on LibreOffice Online Document Editing, an Open Source Alternative to Google Apps, Office 365

Through a project contributed by IceWarp on the principles of Free Software, LibreOffice Online will become the trusted free alternative to proprietary solutions

Springfield, Washington Metro Area and Cambridge, United Kingdom – March 25, 2015.

Collabora, a leading contributor to the popular LibreOffice productivity application, has partnered with IceWarp, the provider of global messaging and collaboration solutions, to jointly develop web-based document editing technology and contribute these to the thriving Free Software community around LibreOffice.


  • IceWarp identified a growing demand for web-based and cloud-based document editing and collaboration, and selected LibreOffice as the leader in open standards productivity applications.
  • LibreOffice started the development of its rendering engine optimized for internet usage in 2011 and was on the lookout for a practical application, with the goal to provide the same quality of working with documents as on the desktop, but using just a web browser.
  • IceWarp with its enterprise solutions background and over 14 years of expertise will help LibreOffice to accelerate the development towards a real product which can be reused by the open source community in a wide range of deployment scenarios.
  • By creating a free alternative that any provider can implement without restrictions, the companies aim to restore fair competition to a market dominated by monopoly suppliers, to drive innovation, compatibility and interoperability through open formats, across all platforms and for everybody.

The lightweight document management features already built into the collaboration and messaging solution IceWarp Server allow users to store, manage and preview Office documents in the web browser, without having any office suite installed on their computers. To edit documents, IceWarp provides a seamless connection between its web-based storage and productivity applications installed on the user’s computer. The growing popularity of these features led IceWarp developers to consider how best to do without an office suite completely, and move it into the browser.

While there are several cloud-based solutions that can edit native Office formats with varying degrees of compatibility, none of them provides the same core values and format compatibility as LibreOffice, now used by over 80 million active users around the world. Another challenge is that the online collaboration market is under tight vendor lock-in, and all existing commercial API offerings are merely a window into a provider-owned cloud service. LibreOffice, on the other hand, in its mission to eliminate digital divides and promote global electronic free speech, set to work on bringing the free office suite into a web browser as early as 2011, but hadn’t materialized the technology into a product that everyone can use.

IceWarp and Collabora will work alongside over a thousand existing LibreOffice contributors to implement the whole online editing portion of the software, including the server-side provided by LibreOffice, and the client front-end based on HTML5 technology. The result will be a fully mature server solution, which any other provider, individual or project in the community can utilize for their applications and services. This will jump start document integration across services where it wasn’t possible before, bringing a whole new way of interactivity to how everyone works in the cloud.


“It is wonderful to marry IceWarp’s vision and investment with our passion and
skills for LibreOffice development. It is always satisfying to work on
something that, as a company, we have a need for ourselves. I’m looking
forward to using this myself, as well as our work together as a team.”
— Michael Meeks, Vice President, Collabora Productivity Ltd

“Creating alternatives is in our DNA. In the same way customers were looking
for Exchange alternatives and made IceWarp what it is today, they will be
seeking Google Apps alternatives and we will be ready.”
— Adam Paclt, IceWarp CEO

“LibreOffice Online will extend the availability of LibreOffice to the cloud,
adding collaboration features which have been asked for by many users. In the
future, a LibreOffice user will be able to seamlessly switch from the desktop
to mobile and to the cloud without leaving his free software environment of
choice.” — Thorsten Behrens, Chairman of The Document Foundation


  1. IceWarp Server
  2. LibreOffice
  3. LibreOffice Next Decade Manifesto
  4. Infographic
About IceWarp
IceWarp is a leading provider of comprehensive messaging solutions for every
business class, size and niche. Building upon a decade of enterprise e-mail
platforms experience, IceWarp offers organizations an all-in-one messaging and
collaboration solution that enables their workforce to communicate through any
platform, be it e-mail, mobile synchronization, chat, SMS, voice or video. The
highly scalable system is used by organizations of all sizes, from SMBs to
large corporations like Marriott International, Verizon Communications,
Inmarsat and Toyota.
About Collabora Productivity
Collabora Productivity delivers LibreOffice products and consulting. With the
largest team of certified LibreOffice engineers, it is a leading contributor
to the LibreOffice code base and community. LibreOffice-from-Collabora
provides a business-hardened office suite with long term multi-platform
support. Collabora Productivity is a division of Collabora Ltd., the global
software consultancy specializing in providing the benefits of Open Source to
the commercial world, specialising in automotive, semiconductors, digital TV
and consumer electronics industries.
About The Document Foundation (TDF)
The Document Foundation is an independent, self-governing and meritocratic
organization, based on Free Software ethos and incorporated in Germany as a
not for profit entity. TDF is focused on the development of LibreOffice – the
best free office suite ever – chosen by the global community as the legitimate
heir of OOo, and as such adopted by a growing number of public
administrations, enterprises and SMBs for desktop productivity.
TDF is accessible to individuals and organizations who agree with its core
values and contribute to its activities. At the end of January 2014, the
foundation has over 190 members and over 3,000 volunteer contributors.

by Sam Tuke at March 25, 2015 12:58 PM

Celebrate Document Freedom Day: new theme for LibreOffice and Firefox

Today Collabora joins freedom advocates around the world to celebrate Document Freedom Day 2015. To help spread the message about the value of Open Standards we’ve rebranded our website, commissioned a special theme for LibreOffice, Firefox, and Thunderbird, and lined up an important announcement for lunchtime today.

Screenshot of Document Freedom Day theme of

Our special web theme for today’s celebration

LibreOffice’s new clothes

If you’re using LibreOffice 4 then you can install new themes with just a few clicks. Today we release a Document Freedom Day theme for LibreOffice — follow the simple instructions below to install it. Many of this year’s Document Freedom Day events will demonstrate LibreOffice as an ideal entry point to ODF document editing — look out for the theme at an event near you!

LibreOffice using Collabora's Document Freedom Day theme

The new LibreOffice theme in action

Theme installation

LibreOffice themes use the same system as Mozilla Firefox. To install the theme in LibreOffice simply:

  1. Click on the “Tools” menu in LibreOffice
  2. Select “Options”
  3. In the pop-up window that appears click “Personalization” in the left pane under “LibreOffice”
  4. Under “Firefox Themes” select “Own theme”
  5. Click “Select Theme” and paste the following theme address into the box:

The theme also works for Firefox and Thunderbird – just visit the link above and click “Add to Firefox”.

Screenshot of LibreOffice setting up a new theme

Selecting a new theme in LibreOffice

More to come

More news is on its way — check back this afternoon for a major announcement that’s set to change open productivity.

by Sam Tuke at March 25, 2015 10:24 AM

March 24, 2015

Michael Meeks

2015-03-24 Tuesday

  • Prep for Document Freedom Day tomorrow; chewed a lot of mail; misc. calls. Late customer call.

March 24, 2015 09:00 PM

March 23, 2015

Michael Meeks

2015-03-23 Monday

  • Mail chew, lots of 1:1's. Lunch, team meeting, calls, another team meeting. More calls.

March 23, 2015 09:00 PM

Caolán McNamara

gtk3 vclplug, full-screen presentation canvas mode

I've newly added simple support to the gtk3 vclplug for "canvas", which is the thing we draw onto for presentations. This means the gtk3 vclplug now supports full-screen presentations, which required a whole massive pile of reorganization of the existing canvas backends to move them from their own per-platform concept in canvas to the per-desktop concept in vcl.

So now, rather than having only one cairo canvas backend based on the xlib APIs which is for "Linux", we have a cairo canvas for each vclplug. The old-school xlib one is moved from inside its #ifdef LINUX in canvas to the shared base of the gtk2, kde, etc. backends in vcl, and there is now a new one for gtk3.

Presumably there are lots of performance gains to be made in the new canvas backend, seeing as I'm just invalidating the whole slide window when the canvas declares that it's flush time, but slides appear to appear instantaneously for me, and fly-ins and move-along-a-path effects are smooth even in -O0 debug mode, so I'll hold back on any optimization efforts for now.

by Caolán McNamara at March 23, 2015 01:08 PM

March 19, 2015

Miklos Vajna

Android editing: from selections to graphic handling

In from input handling to selections, I wrote about how we let the LibreOffice Android app draw the selections around text content natively. A next step in this TDF-funded project is to provide selections around more UI elements: images and shapes.

Here are a number of challenges we (Tomaž Vajngerl and me) faced while we implemented this:

  • On Linux (the desktop), the move and resize operations are really similar: if you click near a resize handle (you "hit it"), then it’ll be a resize, otherwise it’ll be a move. Defining "near" means that you don’t have to click exactly at the center of the handle, but we allow some tolerance. Turns out that the tolerance depended on the pixel size of the handle drawn on the desktop: and because we don’t package the bitmaps of the desktop UI, that tolerance was 0.

  • Writer normally requires a click and a double-click to start editing shape text. One to select the shape and another to actually start the text editing. Instead of literally translating this to a tap and a long push, we wanted to start text editing right away if the user tapped on shape text.

  • Shape text doesn’t use the normal Writer text, but editeng — used by Impress and Calc, too. So we had to instrument the editeng module as well to expose the blinking cursor, so that if you tap inside the editeng text, you have some feedback where you are. Same is true for the cursor handle: once we knew where the cursor is, we could draw the cursor handle, but dragging it did nothing: now the setTextSelection() LOK API handles the case when the cursor is inside editeng text and can adjust the cursor position there, too.

  • On Linux, users got used to the following resize behavior: when images are resized, the aspect ratio is kept, but this is not the case for shapes. We wanted to keep this behavior on Android, too.

If you are interested in how this looks, here is a demo (click on the image to see the video):

Notice how the word selection in a table turns into a table selection, or how a long push inside an empty cell creates a selection containing only the empty cell.

Another direction we’re working towards is showing / hiding the soft keyboard of Android as you would expect. On Linux, it’s easy: the keyboard is always available. However, on Android you should track when it makes sense to use the keyboard and when not — and show/hide automatically according to the context. Examples:

  • When you tap inside text, we show the keyboard.

  • When you finish editing, we hide it.

  • When you start scrolling, we hide it.

  • When you select an image, we hide it.

Additionally, we need to handle the situation when this automagic goes wrong. The Android soft keyboard has a button to hide itself, but we added a toolbar button to force-show it, too (click on the image to see the video):

Finally, Siqi Liu added a new callback type, allowing to tap on hyperlinks and handle them according to how you configured URL handling on your Android device. Here is a demo to show this in action:

That’s it for now — as usual the commits are in master (a few of them are only in feature/tiled-editing for now), so you can try this right now, or wait till next Tuesday and get the Android daily build. :-)

March 19, 2015 11:26 AM

User Prompt

Libreoffice Design Session: CMIS Improvement

Topic of last week’s Libreoffice design session was the integration of content management interoperability services (CMIS). Here is the outcome of this meeting.

Topic: CMIS Improvement

Bug Tickets/Feature Requests:


  • CMIS is only accessible through LibreOffice’s custom file dialogs, which aren’t enabled by default
  • Setting up a new CMIS entry is not always easy/possible (AskLibO)
  • Feature is not visible to the average user
  • No straightforward integration into LO dialog (+/Server…, subtypes at CMIS)
  • No feedback during connection
  • No refresh/sync on changes, at least for Google Drive (LO overrides GD)

Screenshots of current UI


Figure 1: Servers are configured from the internal file dialogs and inserted under places.


Figure 2: A wide variety of services is available.

Features/Functional Requirements

  • Access from start center (to promote this) and toolbar/menu (for fast and easy use)
  • Own dialog since integration into standard dialog is possible but very limited
    • Libreoffice file dialog is removed completely (local files are opened via default dialog) and new dialog for remote files is introduced
  • Types: WebDAV, ftp, ssh, Windows Share, CMIS (with 10 subtypes incl. Google Drive); CMIS types get one level up
  • Synchronization will be most likely not possible

Heuristics/Nonfunctional Requirements

  • Developers should take care with feedback when implementing; that means feedback on access, but more relevant is when the file being saved has been changed in the meantime (the perfect solution would be synchronization)

New design/Mockup


Figure 3: Implementation into start center like for ‘Open File’.

  • Access from the start center via additional item below Open File with a similar behavior

Figure 4: Dialog ‘Open Remote File’ (with parts of the toolbar).

  • Toolbar gets another open button; save gets the option to save remotely (alternatively it can be applied as an option in the button menu)
  • Dropdown ‘Service’ to select the predefined service that consists of type plus name
  • Breadcrumb, folder view, and file list for navigation; filter function for ‘unorganized users’
  • File name allows entering the name when this dialog is used to save a document; otherwise the caption Save is replaced by Load

Figure 5: Three layouts of the dialog to add a service: Empty, Google Drive (GD), and WebDAV.

  • Provide a selection for the type first
  • Types are organized and sorted in a user-centric way (apps, connections, protocols)
  • The simplest type is GD with just the email address
  • Every dialog has the option to change the label (later from dropdown ‘Add service > Edit’) that is filled with a default
  • Dialog’s complexity depends on the connection type
  • Password is asked when connecting to the service but might be included here in advance


First discussion with Libreoffice UX experts revealed an issue when the configuration dialog contains the type of service: The amount of information depends on the type which leads to a ‘jumping’ dialog. Solutions might be: a) accept the change of dialog height (as it is right now), b) introduce some kind of wizard where you first have to select the type and configure it on another page, and c) select the types from ‘Add Service’ and move the functions edit and delete (the selected service) to another button. What do you think?

by Heiko Tietze at March 19, 2015 10:45 AM

March 18, 2015

Caolán McNamara

gtk3 vclplug,

I've been hacking the gtk3 vclplug for LibreOffice recently; here's the before image after scrolling up and down a few times: UI font not rendered the same as the rest of the desktop, bit droppings everywhere, text missing from the style listbox, mouse-wheel non-functional.

Here's today's effort: correct UI font, scrolling just works, mouse-wheel functional, no bit droppings.

After making it possible to render with cairo to our basebmp surface, initially for the purposes of rendering text, I tweaked things so that instead of re-rendering everything in the affected area on a "draw" signal, we do our initial render into the underlying basebmp surface on resize events. We then trust that our internally triggered paints will keep that basebmp up to date, call gtk_widget_queue_draw_area for the areas as they are modified in basebmp, and just blit that basebmp to the gtk3 cairo surface on the resulting gtk_widget_queue_draw_area-triggered "draw". This is pretty much what we do for the MacOSX backend.

The basebmp is now cairo-compatible, so the actual LibreOffice->Gtk3 draw becomes a trivial direct paint to the requested area in the gtk surface from our basebmp surface.

With our cairo-compatible basebmp surface the gtk3 native rendering stuff for drawing the buttons and menus etc can then render directly into that basebmp at the desired locations removing a pile of temporary surfaces, conversion code and bounds-checking hackery.

Further under the hood, however, the headless svp plug that the gtk3 one inherits from had a pair of major ultra-frustrating bugs, which meant that while it looked good in theory, in practice it was still epically failing wrt bit dropping. Now solved are the two underlying clipping-related bugs: one where an optimization effort would trigger creating an overly clipped region, and another where attempts to copy from the surface were clipped out by the clip region.

Still got some glitches in the impress sidebar, and of course the above theming engine is still missing a pile of stuff and slide-show/canvas mode needs implementing, but I'm heartened. It's not complete, but it's now less traffic accident and more building site.

by Caolán McNamara ( at March 18, 2015 03:41 PM

March 17, 2015

Stephan Bergmann


In C++, two types that have the same (fully qualified) name must be the same. The One Definition Rule (ODR) is there to ensure that, even if the types appear in different compilation units. C++ implementations are not required to diagnose all ODR violations, but violations can be observed when features relying on runtime type information (exception handling, dynamic_cast, -fsanitize=vptr, …) start to misbehave.

What the C++ standard does not cover is how to pick unique names (by picking unique namespaces). That prevents composability. If you have two C++ libraries whose source code you cannot change, you cannot in general link them into the same program, as that might cause ODR violations.

What the C++ standard does not cover either is dynamic libraries and symbol visibility across such libraries. That led C++ implementations to slightly bend the ODR rules in order to mitigate the above problem: If two different types that happen to have the same fully qualified name are hidden in two different dynamic libraries, that is technically still an ODR violation, but no problems can arise from it. (In theory, it might also be possible for each library to place all its “internal” entities into unnamed namespaces, which would solve the same problems. In practice, however, a library’s developers will most likely want to spread it across multiple compilation units and still use such internal entities across compilation units, where unnamed namespaces would no longer work.)

A key aspect of the above is how to hide a type in a dynamic library. RTTI for a type is typically represented in C++ implementations as a set of symbols, one of them denoting a C-style string representation of the (mangled) type name. Then, if a C++ implementation uses comparison of string addresses (and not of string contents) to determine type equality, and if the RTTI string symbols are not coalesced across dynamic libraries at runtime (e.g., by keeping them non-exported), then hiding works.

And, on the other hand, an important corollary of the above is that if uses of a type across multiple dynamic objects shall be considered equal (so that e.g. an instance of that type can be thrown in one dynamic library and caught in another), the corresponding RTTI string symbols do need to be coalesced at runtime (e.g., by exporting them as weak symbols from all the dynamic objects using them).

That is why the Itanium C++ ABI mandates address comparison of RTTI strings. It is not only faster (to do the comparison, at least; though not necessarily to load the dynamic libraries and resolve the weak symbols for those types that shall be considered equal across dynamic libraries), it also enables composability.

However, a third thing the C++ standard does not cover is dynamic loading of (dynamic) libraries, with mechanisms like POSIX dlopen. An invocation of dlopen can be either RTLD_LOCAL or RTLD_GLOBAL. While RTLD_GLOBAL makes the exported symbols of the loaded library available to subsequently loaded libraries (which is important if there are any types that shall be considered equal across those libraries, see above), RTLD_LOCAL does not—and can thus break things like throwing exceptions across libraries.

The RTLD_LOCAL problem caused GCC maintainers to deviate from the Itanium C++ ABI and compare RTTI string symbols by content rather than by address. That implies breaking composability.

Clang still sticks to the Itanium C++ ABI’s by-address comparison, but that difference between GCC and Clang does not normally make a difference on Linux: Things like exception handling and dynamic_cast are handled by the C++ runtime library, and on Linux that is GCC’s libstdc++ regardless of whether you compile with GCC or Clang.

One case where it does make a difference is -fsanitize=function and -fsanitize=vptr, UBSan checks detecting certain undefined behavior involving function pointers or polymorphic object types, respectively. For Clang, the RTTI comparisons internally done by those checks are hard-coded in the compiler and would not call into libstdc++. So, when using -fsanitize=undefined:

by stbergmann at March 17, 2015 02:23 PM

Jacobo Aragunde Pérez

Creating new document providers in LibreOffice for Android

We recently completed our tasks for The Document Foundation regarding the Android document browser; nonetheless, we had a pending topic regarding the documentation of our work: write and publish a guide to extend the cloud storage integration. This blog post covers how to integrate new cloud solutions using the framework for cloud storage we have implemented.

Writing a document provider

Document Provider class diagram

We use the name “document providers” for the set of classes that implement support for some storage solution. Document providers will consist of two classes implementing the IDocumentProvider and IFile interfaces. Both contain extensive in-code documentation of the operations to help anybody implementing them.

The IDocumentProvider interface provides some general operations on the provider, intended to provide a starting point for the service. getRootDirectory() provides a pointer to the root of the service, while createFromUri() is required to restore the state of the document browser.

The IFile interface is an abstraction of the java File class, with many similar operations. Those operations will be used by the document browser to print information about the files, browse the directories and open the final documents.

Once those classes have been implemented, the new provider must be linked with the rest of the application by making some modifications to DocumentProviderFactory class. Touching the initialize() method to add a new instance of the provider to the providers[] array should be enough:

    // initialize document providers list
    instance.providers = new IDocumentProvider[3];
    instance.providers[0] = new LocalDocumentsDirectoryProvider();
    instance.providers[1] = new LocalDocumentsProvider();
    instance.providers[2] = new OwnCloudProvider(context);

At this point, your provider should appear in the drawer that pops up with a swipe gesture from the left of the screen.

LibreOffice for Android, provider selection

You are encouraged to create the classes for your document provider in a separate package. Your operations may throw a RuntimeException in case of error; it will be caught by the UI activity and the message inside the exception will be shown, so make sure that you are internationalizing the strings using the standard Android API. You can always take a look at the existing providers and use them as an example, especially OwnCloudProvider, which is the most complex one but still quite manageable.

Making use of application settings

If you are implementing a generic provider for some cloud service, it is quite likely that you will need some input from the user, like a login name or a password. For that reason we have added an activity for configuration related to document providers.

To add your settings to that screen, modify the file res/xml/documentprovider_preferences.xml and add a new PreferenceCategory that contains your own. The android:key attribute will allow you to use the preference from your code; you may want to add that preference string as a constant in DocumentProviderSettingsActivity.

At this point, you will be able to use the preferences in your DocumentProvider using the standard Android API. Take OwnCloudProvider as an example:

    public OwnCloudProvider(Context context) {
        // read preferences
        SharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(context);
        serverUrl = preferences.getString(
                DocumentProviderSettingsActivity.KEY_PREF_OWNCLOUD_SERVER, "");

Finally, we have added a way for providers to check if settings have changed; without it, the application would have to be restarted for the changes to take effect. Your provider must implement OnSharedPreferenceChangeListener, which brings the onSharedPreferenceChanged() method. That method will be called whenever any preference is changed; you can check just the ones you are interested in using the key parameter, and make the required changes to the internal state of your provider.

Preference listener class diagram


This effort has been one more step towards building a full-featured version of LibreOffice for Android, and there are improvements we can, or even must, make in the future:

  • With the progression of the work in the LibreOffice Android editor we will have to add a save operation to the document providers that takes care of uploading the new version of the file to the cloud.
  • It would be a good idea to implement an add-on mechanism to install the document providers. That way we would not add unnecessary weight to the main package, and plugins could be independently distributed.

That’s all for now; you can try the ownCloud provider building the feature/owncloud-provider-for-android branch yourself, or wait for it to be merged. We hope to see other members of the community taking advantage of this framework to provide new services soon.

by Jacobo Aragunde Pérez at March 17, 2015 11:54 AM

March 15, 2015

David Tardon

Document Liberation Project regression testing

Why this post?

I am writing this as a direct response to Miklos’s blog post on the same theme. Miklos argues against the current setup for regression testing that all our import libraries use. I do not believe his approach would be substantially better than the current one. I will try to summarize my thoughts about it in the following text. I, however, admit that the current setup is not quite perfect and I can envision some improvements…

How the current regression test suites work

For every import library, there is a separate repository that contains the regression test suite. It consists of sample documents and pre-generated output files in several formats, which are generated by the command line conversion tools that every library provides. The most important of these is the so-called “raw” format: it is simply a serialization of the librevenge API calls. Additional output formats include ODF and, for the graphics libraries, SVG.

The test suite is driven by two perl scripts: one checks that the current output matches the saved output and writes a diff file for any difference; the other updates the saved output files. These scripts are copied from test suite to test suite and adapted for each use (e.g., which formats are checked, the location of the test directories, etc.).

Better way? Or maybe not…

This section discusses pros and cons of Miklos’s approach in the context of DLP import libraries. It uses citations from Miklos’s blog post.

Better focused checks

Being automatically generated, you have no control over what part of the output is important and what part is not — both parts are recorded and when some part changes, you have to carefully evaluate on a case by case basis if the change is OK or not.

This is not as big a deal as it might seem, especially if the changes are checked regularly and the test repository is kept updated. Usually the changes are quite localized and easy to verify.

Single-point failure

… from time to time you just end up regenerating your reference testsuite, and till the maintainer does that, everyone can only ignore the test results — so it doesn’t really scale.

The test suite is in gerrit, next to the main repository. If someone submits a fix for review, he can submit an update to the test suite too. I admit that the two changes would not be linked in any way, but we do not get that many contributions for that to be a problem. And it would be possible to make the test suite a submodule of the main repository, which would fix this.

No way to forget to run the tests

Provided that make distcheck is run before committing, you can’t forget to clone and run the tests.

As a de-facto release engineer for the majority of DLP‘s libraries, I have got a check list of things to do before a new release. Running the regression tests is just one item on that list.

Less prone to unrelated changes

Writing explicit assertions means that it’s rarely needed to adjust existing tests.

On the other hand, it is extra work to write them and, more importantly, to keep them in sync with the code so that they cover everything that is necessary. With the current approach, any change in the output is immediately visible.

Possible to commit code change + test in a single commit

Having testcase + code change in the same commit is one step closer to the dream …

Not my dream, though. I prefer to push test cases as separate commits anyway…

To be fair, the current approach does make it rather difficult to run the test suite for an older checkout, because there is no association with a particular commit in the test repository. But I do not think I have ever needed this, so I do not see it as a problem.

Big increase in size of the main repository

LibreOffice’s code is huge: 20 MB of test files would be about 1% of its size. This is not true for the libraries we are talking about. Their size is several MB at most, so the addition of a number of data files immediately shows up in the repository size. It also shows up in the release tarball’s size, which is an even more important point.

Let me show an anecdotal example: the current size of unpacked tarball of libetonyek is 3 MB. The cumulative size of the test documents in its test repository is 24 MB. And these documents only cover Keynote 5 format…

Testing of multiple versions of a format induces copy-paste

We typically have tests for multiple versions of the same file format. These also often have approximately the same content across all versions. I assume that, when adding a new test file that is based on a similar file produced by a different version of the application, the test case would most probably be copied from the test case for that other file. That means that if a change is needed later (e.g., to add a new check), it has to be duplicated in several places. This increases the risk that some of the test cases will not be updated.

Possible improvements

Diff is not always good enough

If there is a change in the output, the check script generates a diff. This, however, is not always the best way to show the changes. In some cases, a word diff (e.g., generated by dwdiff) would be much better.

Dependency on other libraries

All the output generators are implemented in external libraries. This is not a problem for the “raw” output, as this is not expected to change. But ODF output is often affected by changes in libodfgen. Unfortunately, this also means that the tests only work with a specific version of libodfgen–typically the current master. This is a problem and I think that our decision to test ODF conversion in the libraries’ test suites was wrong and counter-productive. IMHO the output generators should be tested in the libraries that implement them.

This is already partly done for libodfgen, as we have test code that generates various ODF documents programmatically. But the output is just saved to files that must be examined manually; there is no automated check of the output. IMHO Miklos’s approach would be really beneficial here.


While the current regression testing setup is not perfect, there is no need to radically change it, as the proposed alternative does not really add many benefits. The biggest concern is a considerable increase in size of the release tarballs. However, we should limit the tests to the use of the raw format and move tests of output generators to the libraries that implement them. It makes sense to use Miklos’s approach to test these.

by David Tardon at March 15, 2015 02:13 PM

March 14, 2015

Miklos Vajna

Document Liberation Project regression testing

Earlier I wrote about my setup to hack libvisio. One missing bit was testing the contributed code. Testing can be performed at various levels, so far DLP libraries were tested by recording the output of the various foo2raw tools and then comparing the current output to some previously known good state. This has a number of benefits:

  • If you know that the current state is good, then there is no need to write testcases; you can just record your state automatically.

  • Any change in the output will signal instant failure, so it gives pretty good test coverage.

The same technique was used in LibreOffice for Impress testcases initially, however we saw a drawback there: Being automatically generated, you have no control over what part of the output is important and what part is not — both parts are recorded and when some part changes, you have to carefully evaluate on a case by case basis if the change is OK or not. The upshot is that from time to time you just end up regenerating your reference testsuite, and till the maintainer does that, everyone can only ignore the test results — so it doesn’t really scale.

In short, both techniques have some benefits, but given that the libvisio test repo is quite empty, I thought it’s a good time to give another method (which we use quite successfully in LO code) a go, too. This method is easy: instead of recording the whole output of some test tool, output a structured format (in this case XML), and then just assert the interesting part of it using XPath. Additionally, these tests are in libvisio.git, so you can nicely put the code change and the testcase in the same commit. So the hope is that this is a more scalable technique:

  • Provided that make distcheck is run before committing, you can’t forget to clone and run the tests.

  • Writing explicit assertions means that it’s rarely needed to adjust existing tests. Which is a good thing, as there are no tests for the tests, so touching existing tests should be avoided, if possible. ;-)

  • Having testcase + code change in the same commit is one step closer to the dream e.g. the git.git guys do — they usually require documentation, code and test parts in each patchset. :-)

Technically this method is implemented using a librevenge::RVNGDrawingInterface implementation that generates XML. For now, this is part of libvisio, so in case you want to re-use it in some other DLP library, you need to copy it to your import library, though if indeed multiple importers start to use it, perhaps it’ll be moved to librevenge. The rest of the test framework is a simple testsuite runner and a cppunit TestFixture subclass that contains the actual test cases.

So in case you are planning how to test your import library, then now you have two options, evaluate them and choose what seems to be the better tool for your purpose.

March 14, 2015 07:09 PM

March 13, 2015

Thorsten Behrens

Announcing odpdown, a markdown-to-ODP converter

Over the years, I met a number of people — trainers, speakers, hackers — who produce a lot of slide-ware (decks often more than a hundred pages long), but never really use the conventional presentation packages for it.

Instead, they script things, (partially) autogenerate text, and use one of the markup-to-presentation converters out there: latex beamer, showoff, pandoc, or S9 (there are many more).

Asking them why, the answer is always one or more of the below:

  • I prefer my text editor over the slide editing package
  • I want to track my content in a revision control system
  • I auto-generate content, or need to frequently merge stuff
  • and (rather infrequently): because PowerPoint is just sooo not cool

I can’t really help with the last item, but the first three clearly resonate well with me. While I was preparing slides for a hands-on session about ceph last year, I thought it would be pretty nice to get some tool hacked up that formats all the shell commands in nice typewriter script, and let me re-generate slides after every major rework (we were in the middle of moving demo setup to AWS, i.e. the ground was changing under our feet substantially). Born was the first rough cut of odpdown:

Why didn’t I re-use one of the existing tools from above? Well, the event in question had strict requirements on the design and formatting of the slides, so I was stuck with a given Impress or PowerPoint slide template. And secondly, I think the auto-generation software space in the ODF ecosystem is under-developed — this is therefore to some extent a showcase for what is possible, and what existing libraries are there to build upon.

So the initial hack has since been refined a lot, and test-driven by a few people (12 issues filed in two days by Adam Spiers, I was embarrassed!). Therefore today I feel confident enough to announce version 0.4.1 as a beta release a bit more widely:

Using it should be a matter of installing the package (manual installation instructions and a quick usage howto here), typing up some markdown, and calling it thusly:

     odpdown \
         --break-master=break_slides --content-master=content_slides \
         corp_template.odp out_slides.odp

A quick walk-through PDF for basic markup is available here:

Basic markup


Conversely, a quick walk-through PDF for more advanced markup is here:

Advanced markup


Have fun!


by thorstenb at March 13, 2015 12:29 AM

March 12, 2015

User Prompt

Libreoffice Design Session: Entries at Indexes and Tables

The Libreoffice UX team started last week with another type of meeting, the design session. The goal is to discuss one issue in detail along with possible solutions, mostly a mockup. This includes a clear description of the issue, functional and non-functional requirements, as well as constraints for the design. Based on these artifacts we design the mockup together via screen sharing.

Last week’s topic was the Entries tab on the dialog Indexes and Tables.


  • Abbreviations hinder understanding of content → Entries should have the full text
  • Horizontal usage of space is cumbersome; we have much more room for vertical arrangement
  • Interactions are neither clear from layout nor the captions
  • Setting up entries and formatting/styling should get merged
  • Design is not appealing (at least the list with levels is too small)
  • Summarized in bug report #89608

Screenshot of the current dialog


Figure 1: Current dialog

Functional Requirements

  • Entries
    Chapter number (E#), Heading Text (E), Page number (#), Hyperlink start/end (LS, LE), Tabstop (T), Text Entry
  • Style and formatting
    Formatting of entries (e.g. number with or w/o separator), Style of entry, Format (Tab position relative…) → should rather be moved into Index/Table since it affects the whole table of contents (TOC) and not a particular entry
  • Functions
    Add entry, Delete an entry (via keyboard only), Sort (newly introduced; nice to have), All (Apply to all levels)
  • Levels
  • Live preview

Non-functional requirements / Constraints for the design

  • Information of the current scope is shown in one dialog using tabs to structure the content
  • No hidden features; ‘expanded’ controls are favored
  • Edit options first, apply to doc later
  • Fully flexible and customizable by the user

Proposal for new layout


Figure 2: New layout.

  • Change list of levels from full view to drop down
  • Add paragraph style from other tab into the form
  • Have full names for entries
  • Update labels
  • Add sorting feature
  • Use more descriptive labels


  • Click Add, select type of entry from dropdown, and have a new item below the current selection (selection changes to the new one)
  • Specify options for the added entry
  • Select an entry and get the respective options to change
  • Move entry up/down
  • Delete selected entry


The first discussion with developers and experts was about labels and the fact that the entries depend on the type of index (first tab). So we have not only the table of contents (TOC) but also the Alphabetical Index, Illustration Index, Index of Tables, User-Defined Index, Table of Objects, and Bibliography. Two solutions come to mind: First, the concept allows changing the list content from TOC items to bibliography-related information; the workflow wouldn’t be affected. Second, we could offer the new content only in the case of a TOC and keep the current layout for all others. What do you think?

We plan to do the design sessions regularly. This week we will discuss the integration of content management systems. You are welcome to join us on Friday 1pm UTC.

by Heiko Tietze at March 12, 2015 12:06 PM

Björn Michaelsen

Following the White Rabbit

When logic and proportion have fallen sloppy dead
And the white knight is talking backwards
And the red queen’s off with her head
Remember what the dormouse said
Feed your head, feed your head

– Jefferson Airplane, White Rabbit

So, this was intended as a quick and smooth addendum to the “50 ways to fill your vector” post, bringing callgrind into the game and assuring everyone that its instruction counts are a good proxy for the walltime performance of your code. This started out mostly as expected, when measuring the instruction counts in two scenarios:

implementation/cflags -O2 not inlined -O3 inlined
A1 2610061438 2510061428
A2 2610000025 2510000015
A3 2610000025 2510000015
B1 3150000009 2440000009
B2 3150000009 2440000009
B3 3150000009 2440000009
C1 3150000009 2440000009
C3 3300000009 2440000009

The good news here is that this mostly faithfully reproduces some general observations on the timings from the last post on this topic, although the differences are more pronounced in callgrind than in reality:

  • The A implementations are faster than the B and C implementations on -O2 without inlining
  • The A implementations are slower (by a smaller amount) than the B and C implementations on -O3 with inlining

The last post also suggested the expectation that all implementations could — and with a good compiler: should — have the same code and the same speed when everything is inlined. Apart from the A implementations still differing from the B and C ones, callgrind’s instruction counts suggest this to actually be the case. Letting gcc compile to assembler and comparing the output, one finds:

  • Inline A1-3 compile to the same output on -Os, -O2, -O3 each. There is no difference between -O2 and -O3 for these.
  • Inline B1-3 compile to the same output on -Os, -O2, -O3 each, but they differ between optimization levels.
  • Inline C3 output differs from the others and between optimization levels.
  • Without inlinable constructors, the picture is the same, except that A3 and B3 now differ slightly from their kin as expected.

So indeed most of the implementations generate the same assembler code. However, this is quite a bit at odds with the significant differences in performance measured in the last post, e.g. B1/B2/B3 on -O2 created widely different walltimes. So it was time to test the assumption that running one implementation for a minute produces statistically stable results, by doing ten 1-minute runs for each implementation and checking the standard deviation. The following walltimes were found (no inline constructors):

implementation/cflags -Os -O2 -O3 -O3 -march=
A1 80.6 s 78.9 s 78.9 s 79.0 s
A2 78.7 s 78.1 s 78.0 s 79.2 s
A3 80.7 s 78.9 s 78.9 s 78.9 s
B1 84.8 s 80.8 s 78.0 s 78.0 s
B2 84.8 s 86.0 s 78.0 s 78.1 s
B3 84.8 s 82.3 s 79.7 s 79.7 s
C1 84.4 s 85.4 s 78.0 s 78.0 s
C3 86.6 s 85.7 s 78.0 s 78.9 s
no inline measurements

And with inlining:

implementation/cflags -Os -O2 -O3 -O3 -march=
A1 76.4 s 74.5 s 74.7 s 73.8 s
A2 75.4 s 73.7 s 73.8 s 74.5 s
A3 76.3 s 74.6 s 75.5 s 73.7 s
B1 80.6 s 77.1 s 72.7 s 73.7 s
B2 81.4 s 78.9 s 72.0 s 72.0 s
B3 80.6 s 78.9 s 72.8 s 73.7 s
C1 81.4 s 78.9 s 72.0 s 72.0 s
C3 79.7 s 80.5 s 72.9 s 77.8 s
inline measurements

The standard deviation for all the above values is less than 0.2 seconds. That is … interesting: For example, on -O2 without inlining, B1 and B2 generate the same assembler output, but execute with a very significant difference on hardware (5.2 s difference, or more than 25 standard deviations). So how have logic and proportion fallen sloppy dead here? If the same code is executed — admittedly from two different locations in the binary — how can that create such a significant difference in walltime performance, while not being visible at all in callgrind? A wild guess, which I have not confirmed yet, is cache locality: when not inlining constructors, those might be in the CPU cache for one copy of the code in the binary, but not for the other. By the way, this might also hint at the reason why the -march= flag (which creates bigger code) seems so ineffective. And it might explain why performance is rather consistent when using inline constructors. If so, the impact of this is certainly interesting. It also suggests that allowing inlining of hotspots, like recently done with the low-level sw::Ring class, produces a much bigger performance improvement on real hardware than the meager results measured with callgrind. And it reinforces the warning made in that post about not falling into the trap of mistaking the map for the territory: callgrind is not a “map in the scale of a mile to the mile”.

Addendum: As said in the previous post, I am still interested in such measurements on other hardware or compilers. All measurements above were done with gcc 4.8.3 on an Intel i5-4200U@1.6GHz.

by bmichaelsen at March 12, 2015 10:44 AM

March 11, 2015

Markus Mohrhard

Supporting more OOXML dialects in chart import

A common problem during our OOXML import is that there are several different OOXML dialects: OOXML transitional, OOXML strict, and the unspecified variant written by MSO 2007. The MSO 2007 variant is mostly identical to OOXML transitional, with the small but nasty exception that some default values differ. Recently I got a document from a Collabora customer, created with MSO 2007, that exhibited some bugs related to that.

A few days ago I finally managed to bring support for handling the differences between the OOXML dialect written by MSO 2007 and the one in the specification to LibreOffice. This is an important step forward for our OOXML chart import as that code was written against the MSO 2007 version and more and more documents are generated by newer MSO versions. In recent years we have changed quite a few of the default values in the code to handle OOXML specification conforming documents correctly. Sadly this introduced a number of regressions for the handling of MSO 2007 documents.

With [1] and [2] we are now able to recognize files that have been created by MSO 2007 and to use different default values for them. Currently this is only used for the flag that decides whether the chart title is deleted, but more cases might be fixed in the future.

by Markus Mohrhard at March 11, 2015 08:27 PM

March 10, 2015

Collabora Community

LibreOffice 4.5 to provide PDF signing and timestamping

Collabora Productivity has completed integration of trusted timestamping and digital signing into LibreOffice. Used extensively by governments and information security companies, these features make LibreOffice the first comprehensive Open Source PDF signing solution. The work was commissioned by Swiss non-profit Wilhelm Tux after a successful crowdfunding campaign in October.

Trusted timestamping will be released in upcoming LibreOffice 4.5. It securely tracks the creation and modification of documents — once a document has been timestamped, it is impossible to compromise or dispute its integrity. Certificate signing of PDFs, which Collabora recently published in LibreOffice 4.4, guarantees a document’s origin and authenticity. Combined, these new features enable LibreOffice to generate documents suitable for a wide variety of secure and legal settings.

“The addition of signatures and timestamps makes LibreOffice the obvious choice for a range of buyers,” says Michael Meeks, Vice President at Collabora Productivity. “These enterprise features are the latest to cater to professional users, and reflect the demanding environments in which LibreOffice is being deployed.”

The signatures that are produced are interoperable with all PDF readers supporting the PDF/A standard, including products from Adobe. The signing process makes use of certificates and cryptography native to the operating system used, with Windows versions of LibreOffice using Microsoft’s included certificate manager. Mac and Linux versions include the NSS cryptographic library shared with Mozilla Firefox. Industry-standard X.509 certificates are used for signing documents, and can be obtained from a wide range of certificate authorities.

Trusted timestamping implements IETF standard RFC 3161, and requires validation from a Time Stamping Authority (TSA). Several TSAs provide the service free of charge, while Open Source TSA server applications may be deployed and operated independently.

“The use of PDF signatures and timestamps is required by Swiss law in many regulations concerning the exchange of documents with Government bodies, including the EÖBV for notary documents, and ElDI-V for tax receipts” said Markus Wernig, Chairman of Wilhelm Tux. “I’m pleased and amazed that Collabora have achieved the features in such a short time”.

About Collabora Productivity:
Collabora Productivity delivers LibreOffice products and consulting. With the largest team of certified LibreOffice engineers, it is a leading contributor to the LibreOffice code base and community. LibreOffice-from-Collabora provides a business-hardened office suite with long term multi-platform support. Collabora Productivity is a division of Collabora Ltd., the global software consultancy specializing in providing the benefits of Open Source to the commercial world, with a focus on the automotive, semiconductor, digital TV and consumer electronics industries.
About Wilhelm Tux:
Wilhelm Tux is a Swiss non-profit, non-government organisation founded in 2002. Its focus is advocating Free and Open Source Software, the use of Open Standards in the public sector, and the protection of digital civil liberties. The group and its partners work to establish a favourable environment for the adoption of Free Software in Switzerland. In the past Wilhelm Tux served as a member of the constituting committee of the Swiss standardization body eCH, and has since taken an active role in public debate on digital issues within the public sector.

by Sam Tuke at March 10, 2015 09:40 AM

March 05, 2015

Jacobo Aragunde Pérez

New features in LibreOffice for Android document browser

The Document Foundation recently assigned one of the packages of the Android tender to Igalia; in particular, the one about cloud storage and email sharing. Our proposal comprised the following tasks:

  • Integrate the “share” feature of the Android framework to be able to send documents by email, bluetooth or any other means provided by the system.
  • Provide the means for the community to develop integration of cloud storage solutions.
  • Implement ownCloud integration as an example of how to integrate other cloud solutions.
  • Extensive documentation of the process to integrate more cloud solutions.

The work is complete and the patches are available in the repository; most of them are already merged in master, while ownCloud support lives in a different branch for now.

Sharing documents

The Android-provided share feature allows sending a document not only through email but also through bluetooth or any other available method, depending on the software installed on your device.

We have made this feature available to users through a context menu in the document browser, which pops up after a long press on a document.

Context menu in Android document browser

Share from the Android document browser

Support for cloud storage solutions

This task consisted of creating an interface for developing the integration of any cloud storage solution. The first step was abstracting the code that made direct access to the file system, so it could be replaced by different implementations of storage services, which from now on will be called document providers.

Afterwards, we created two document providers for local storage: one to access the internal storage of the device and another one to conveniently access the Documents directory inside the storage. These two simple providers served as a test of the UI to switch between them; we used the Android drawer widget, which pops up with a swipe gesture from the left of the screen.

Side drawer in Android document browser

All the operations in the Android document browser were being performed in the same thread. Besides this being suboptimal, the development framework actually forbids running network code in the main thread of the application. The next step for us was isolating the code that might need network access when interacting with a cloud provider, and running it in separate threads.

ownCloud document provider

At that point, we had everything in place to write the code to access an ownCloud server. We did it with the help of an Android library provided by ownCloud developers.

There was still another task, though; any cloud service will likely need some configuration from the user, such as login credentials. We had to implement a preferences screen to enter these settings and do the proper wiring so the provider can listen for any changes in them.

ownCloud settings screen


To help other developers write new document providers, we have tried to document the new code in detail, especially those interfaces that must be implemented to create new document providers. Besides, we will soon publish a document here explaining how to extend the cloud storage integration.

That’s all for now; to try the ownCloud provider you will have to build the feature/owncloud-provider-for-android branch yourself, while you will find the share feature in the packages already available in the Play Store or F-Droid. Hope you enjoy it!

by Jacobo Aragunde Pérez at March 05, 2015 03:46 PM

Björn Michaelsen

LibreOffice around the world

Around the world, Around the world
— Daft Punk, Around the world

So, you have still heard that unfounded myth that it is hard to get involved with and to start contributing to LibreOffice? Still? Even though there are our Easy Hacks, and the LibreOffice developers are a friendly bunch who will help you get started on mailing lists and on IRC? If those alone do not convince you, it might be because it is admittedly much easier to get started if you meet people face to face — like at one of our upcoming events! Especially our Hackfests are a good way to get started. The next one will be at the University de Las Palmas de Gran Canaria, where we had been guests last year already. We presented some introductory talks to the students of the university and then went on to hack on LibreOffice, from fixing bugs to implementing new features. Here is how that looked last year:

LibreOffice Hackfest Gran Canaria 2014

LibreOffice Hackfest Gran Canaria 2014

One thing we learned from previous Hackfests was that it is great if newcomers have a way to start working on code right away. While it is rather easy to do that, as the 5-minute video on our wiki shows, a build might still take some time on some notebooks. So what if you spontaneously show up at the event without a pre-built LibreOffice? Well, for that we now have — thanks to Christian Lohmaier of the Document Foundation staff — remote virtual machines prepared for Hackfests, which allow you to get started right away with everything prepared — on rather beefy hardware even, that is.

If you are a student at ULPGC or live in Las Palmas or on the Canary Islands, we invite you to join us and learn how to get started. For students, this is also a very good opportunity to get involved and prepare for a Google Summer of Code on LibreOffice. Furthermore, if you are even a casual contributor to LibreOffice code already and want to help share and deepen knowledge on how to work on LibreOffice code, you should get in contact with the Document Foundation — while the event is already very soon now, there still might be travel reimbursement available. You will find all the details on the wiki page for the Hackfest in Las Palmas de Gran Canaria 2015.

LibreOffice Evening Hacking

LibreOffice Evening Hacking in Las Palmas 2014

On the other hand, if two weeks is too short notice for you, but the rest of this sounds really tempting, the next Hackfest is already planned: it will take place in Cambridge in the United Kingdom in May. We will be there with a Hackfest for the first time and invite you to join us from anywhere in Europe, whether you are a LibreOffice code contributor or are interested in learning more about how to become one. Again, there is a wiki page with the details on the LibreOffice Hackfest in Cambridge 2015, and travel reimbursements are available. Contact us!

LibreOffice Evening Hacking

How I imagine Cambridge in May — Photo by Andrew Dunn CC-BY-SA 2.0 via Wikimedia

by bmichaelsen at March 05, 2015 12:28 PM

Charles Schulz

Climbing the winding stairs of Emacs

My earlier reports about my interest in and use of Emacs had mostly focused on editing code or text, mostly CSS, HTML and org files. I had considered using Emacs for reading and processing email, and I had backed away from it, although I wanted to give myself the time to consider more options and get even more acquainted with the various tools and modes available for the famous text editor. Today, I’d like to share with you my latest progress and how my choice to invest more in Emacs shapes the way I will be using my desktop in the near future.

Increasing my daily use of Emacs

Aside from thinking about email clients on Emacs, I also thought that if I were to do anything other than my (almost) daily editing of org files and note taking, I would need to increase the possible use cases involving Emacs on a regular basis. As I wrote in my post this Summer, I use Emacs mostly for its fantastic Org-mode, this notes/organizer/GTD/project management/calendaring set of tools. I use Emacs from time to time (say a few times a month) to edit CSS and HTML files, sometimes even quite complex CSS. More rarely, javascript or python files will come my way and I may need to edit them. The “programming” angle of Emacs does exist for me, but alone it would keep my use of Emacs only somewhat more frequent than, say, the Gimp. If I had continued working on web sites on an almost daily basis like in 2013, I would use it a lot. These days this type of activity is not infrequent for me, but it certainly does not happen daily; and it falls under the categories of “hobbies” and “family and friends stuff”.

In this context, it is Org-mode that really took off as my main and daily use of Emacs. But Org-mode did make me curious about the power and the possibilities of Emacs. Using it more, even on a daily basis, would then mean using other tools, which is what I have actually started doing:

  • RSS reading: After a frustrating experience with the native RSS reader of Emacs, Newsticker, which led me to the conclusion that it does indeed have its own XML standard against everybody else including the W3C and OASIS, I opted for Elfeed. It is fast, rather simple, and works pretty much out of the box.
  • Note taking through Org-Capture: It’s not enough to use the Scratch buffer, it’s nice to actually take notes and manage them; well to be honest, I don’t do refilings nor attachments yet; but actually taking notes with Org-mode is quite nice.
  • Drafting some documents and blog posts with the MarkDown mode (I’m investigating direct exports to WordPress with org2blog)
  • Browsing some web pages with eww, at least the ones that are really mostly about text
  • Having fun with themes, which, oddly enough, involves installing packages, dabbling with Emacs packages repositories.
  • Configuring Emacs utilities, such as the Recent tool
  • Trying to do more stuff from within Emacs, such as opening a document (.doc, PDF, ODF…) only for a short consultation, or browsing tweets (yes!)
  • I plan to investigate dired (file manager) soon.

You may now start to notice the recurring idea behind these use cases: all of them aim at increasing my ability to stay within emacs when doing things on my computer. Is it absolutely useful? Not overly so, because I don’t master all these tools to the point where I can gain time in a significant way; but that’s one of the goals I keep in mind. Do I get to learn a lot? Absolutely, yes. Learning all this does make me go both through setbacks and leaps forward. I’m starting to get to the point where I instinctively use Emacs keybindings for other programs, and where I no longer even have the instinct to look for the mouse when on Emacs. I’m pretty happy with all this. But I had to continue and come back to the mail challenge.

Emacs and Email

Before configuring any email “client” or component on Emacs I had to think a bit about which one would be best for me. I took some time with this surprisingly crowded list of available options. In my view, Emacs email viewers, indexers or anything that remotely looks like a client fall into two broad categories (the Unix philosophy and design preclude anything like Evolution or Thunderbird and rely instead on specific tools for each type of usage: fetching mail, sending mail, etc. typically are not part of the “email clients” on Emacs). These two categories are not based on features, but rather on the generation of tools. The first one is the old, historical category where you have Gnus, Rmail and tools like VM, MH-E and a few more. The new category tends to be composed of smaller and more recent projects, doing one thing well but not much more than that: NotMuch Mail, Mu4e, Wanderlust and Mew. There are others of course, but they’re just less known.

My requirements are these: IMAP access, local storage relying on MailDir or MH, and support for multiple accounts. After having read quite a lot about the tools mentioned above, I once again decided in favor of mu4e. Gnus was way too complex and on top of this seems to be rather slow, especially when used with MailDir as a backend. Rmail is somewhat more interesting and simpler, but its primary reliance on mbox would have me pipe all sorts of odd hacks to save my mail as MailDir. VM did not seem to be developed anymore and does not meet several of my requirements anyway.

I got interested in Mew; Wanderlust seemed rather buggy judging by the general feedback; NotMuch Mail really hooked me for a while and was the most serious contender to mu4e. NotMuch is first of all a mail indexer, just like mu is the indexer of mu4e, but the two differ in how they search and index mail. The development of NotMuch Mail is a bit more recent and some really nice things are happening in this project. One thing ticked me off at first, and then I realized that mu4e’s longer existence had made a few more things available in terms of interface that ultimately made me choose it over NotMuch Mail: NotMuch Mail tags email. It does not seem to have an approach based on folders; sorting emails may thus be powerful but is, in my eyes, rather unsettling. The interface of mu4e is better suited for thread management as well.

This time, I set up and configured mu4e for my three main accounts, which amounts to roughly 5 GB of email. The indexing and retrieving is indeed wonderfully fast; now that I use this tool with more than just 50 MB of email in one IMAP account, I can experience why people love non-GUI email clients and indexers so much. So far I’m not getting rid of Claws-Mail, which sits on my primary machine; and I keep Evolution at hand, were it only for calendaring. I need to learn the keybindings and set quite a lot of details (HTML view, vertical split view, which seems to work well with mu4e). I’m not using this on a daily basis yet, but I’ll invest more time once every little setting is properly configured.

I really like this experience. Emacs’ winding stairs may be at times difficult, but overcoming them is definitely rewarding.

What about word processing?

One of the most well-known epiphanies about Emacs is that you can do everything with it once you realize that you manipulate insanely high amounts of text as a developer or as an “information worker”. This seems to suggest that office suites like LibreOffice could become obsolete for someone using Emacs or any other powerful text editor. If you can and plan to use LaTeX and it covers the sum of your needs when sending documents to other people, I guess it’s great and you don’t have to rely on an office suite. The same goes for spreadsheets if all you need is simple tables, and for simple presentations as well. But life tends to be more complicated than that, and after some investigation, I do not plan on learning LaTeX anytime soon: LibreOffice does all I need when it comes to creating and editing complex visual documents. I can organize my thoughts in Emacs, but I won’t be able to properly draft the same document in Emacs. I know how LaTeX works, but very few people can rely on LaTeX to edit back the same document. I also believe there’s an actual elegance in using office suites when they’re being properly used, especially when it comes to applying styles and working with them as the baseline of document creation: this is something that seems to elude inexperienced users as much as the power users of LaTeX and text editors in general. To me the debate between writing your content and then formatting it vs. writing and formatting at the same time is not a relevant one: an office suite can help you structure both the form (the document) and your thoughts (its content) very effectively. Here again, it’s all about learning!

by Charles at March 05, 2015 11:32 AM

March 04, 2015

Andreas Mantke

New LibreOffice Extensions Testsite online again

The new test website for LibreOffice extensions with an improved structure is online again. You can visit it at:

I was able to update the test website with my changes from the last few days. It now has better descriptions on some fields and more validation of user input. If you want to volunteer to test the new website with its new structure and features, you can ask for an account there using the entry at the bottom of the site: ‘Host your product’. It links to a form with the data necessary to create a new user account.

I worked a bit further on the software for the new website today and committed my changes to the TDF github repository. They will be visible on the test website with the next update.

by andreasma at March 04, 2015 09:05 PM

March 03, 2015

Tim Janik

Apache SSLCipherSuite without POODLE

In my previous post Forward Secrecy Encryption for Apache, I’ve described an Apache SSLCipherSuite setup to support forward secrecy which allowed TLS 1.0 and up, avoided SSLv2 but included SSLv3. With the new POODLE attack (Padding Oracle On Downgraded Legacy Encryption), SSLv3 (and earlier versions) should generally be avoided. Which means the cipher configurations discussed [...]

by timj at March 03, 2015 11:05 PM

Andreas Mantke

Status of AddOn Development for New LibreOffice Extensions-Site

I got some testers for the new LibreOffice Extensions site after a call among German-speaking community members, and received some valuable feedback from the volunteer testers. Thus I could improve the new add-on for the Plone-driven site. I have already committed this work to the TDF github repository. My changes cover usability improvements of the forms as well as additional validation features. But I could not update the LibreOffice extensions test website with my code changes, because the virtual machine that runs the site’s Plone instance is currently down.

by andreasma at March 03, 2015 09:47 PM

Chris Sherlock

OSI membership

I've just joined the OSI as a member - in AUD it is about $50, but it's well worth it!

The reason I became a member was that I thought it was high time I did this. I contribute to LibreOffice, and I truly believe that open source and open culture are very important to society at large. I believe that open source gives maximum freedom to those in society who are not necessarily empowered due to economic or social circumstances. It levels the playing field, and what I most love is that it really gives transparency to those who use software, so that they can verify and improve upon the work of those who have gone before them, without restricting the ability of others to use that work to improve the conditions and lives of others in society.

I encourage others to also join the OSI, as it really is a force for good in the world.

by Chris Sherlock ( at March 03, 2015 12:50 AM

March 02, 2015

Björn Michaelsen

50 ways to fill your vector …

“The problem is all inside your head” she said to me
“The answer is easy if you take it logically”
— Paul Simon, 50 ways to leave your lover

So recently I tweaked around with these newfangled C++11 initializer lists and created an EasyHack to use them to initialize property sequences in a readable way. This caused a short exchange on the LibreOffice mailing list, which I assume had its part in motivating Stephan’s interesting post “On filling a vector”. For all the points being made (also in the quick follow-up on IRC), I wondered how much the theoretical “can use a move constructor” discussion etc. really meant when the C++ is translated to e.g. GENERIC, then GIMPLE, then amd64 assembler, then to the internal RISC instructions of the CPU – with multiple levels of caching in addition.

So I quickly wrote the following (thanks so much, C++11, for now having the nice std::chrono).


#include <vector>
struct Data {
    Data();
    Data(int a);
    int m_a;
};
void DoSomething(std::vector<Data>&);


#include "data.hxx"
// noop in different compilation unit to prevent optimizing out what we want to measure
void DoSomething(std::vector<Data>&) {};
Data::Data() : m_a(4711) {};
Data::Data(int a) : m_a(a+4711) {};


#include "data.hxx"
#include <iostream>
#include <vector>
#include <chrono>
#include <functional>

// Note: the loop bodies below were truncated in this copy of the post; the
// fill calls are reconstructed from the naming scheme (1: push a temporary,
// 2: brace-initialize, 3: construct from int; C variants reserve() first).
void A1(long count) {
    while(--count) {
        std::vector<Data> vec { Data(), Data(), Data() };
        DoSomething(vec);
    }
}

void A2(long count) {
    while(--count) {
        std::vector<Data> vec { {}, {}, {} };
        DoSomething(vec);
    }
}

void A3(long count) {
    while(--count) {
        std::vector<Data> vec { 0, 0, 0 };
        DoSomething(vec);
    }
}

void B1(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.push_back(Data()); vec.push_back(Data()); vec.push_back(Data());
        DoSomething(vec);
    }
}

void B2(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.push_back({}); vec.push_back({}); vec.push_back({});
        DoSomething(vec);
    }
}

void B3(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.emplace_back(0); vec.emplace_back(0); vec.emplace_back(0);
        DoSomething(vec);
    }
}

void C1(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.reserve(3);
        vec.push_back(Data()); vec.push_back(Data()); vec.push_back(Data());
        DoSomething(vec);
    }
}

void C3(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.reserve(3);
        vec.emplace_back(0); vec.emplace_back(0); vec.emplace_back(0);
        DoSomething(vec);
    }
}
double benchmark(const char* name, std::function<void (long)> testfunc, const long count) {
    const auto start = std::chrono::system_clock::now();
    testfunc(count);
    const auto end = std::chrono::system_clock::now();
    const std::chrono::duration<double> delta = end-start;
    std::cout << count << " " << name << " iterations took " << delta.count() << " seconds." << std::endl;
    return delta.count();
}
int main(int, char**) {
    long count = 10000000;
    while(benchmark("A1", &A1, count) < 60l)
        count <<= 1;
    std::cout << "Going with " << count << " iterations." << std::endl;
    benchmark("A1", &A1, count);
    benchmark("A2", &A2, count);
    benchmark("A3", &A3, count);
    benchmark("B1", &B1, count);
    benchmark("B2", &B2, count);
    benchmark("B3", &B3, count);
    benchmark("C1", &C1, count);
    benchmark("C3", &C3, count);
    return 0;


main: main.o data.o
    g++ -o $@ $^

%.o: %.cxx data.hxx
    g++ $(CFLAGS) -std=c++11 -o $@ -c $<

Note that the object here is small and trivial to copy, as one would expect from objects passed around by value (expensive-to-copy objects can mostly be passed around with a std::shared_ptr instead). So what did this measure? Here are the results:

Time for 1280000000 iterations on a Intel i5-4200U@1.6GHz (-march=core-avx2) compiled with gcc 4.8.3 without inline constructors:

implementation / CFLAGS    -Os        -O2        -O3        -O3 -march=…
A1                         89.1 s     79.0 s     78.9 s     78.9 s
A2                         89.1 s     78.1 s     78.0 s     80.5 s
A3                         90.0 s     78.9 s     78.8 s     79.3 s
B1                        103.6 s     97.8 s     79.0 s     78.0 s
B2                         99.4 s     95.6 s     78.5 s     78.0 s
B3                        107.4 s     90.9 s     79.7 s     79.9 s
C1                         99.4 s     94.4 s     78.0 s     77.9 s
C3                         98.9 s    100.7 s     78.1 s     81.7 s

creating a three element vector without inlined constructors
And, for comparison, here are the results if one allows the constructors to be inlined.
Time for 1280000000 iterations on a Intel i5-4200U@1.6GHz (-march=core-avx2) compiled with gcc 4.8.3 with inline constructors:

implementation / CFLAGS    -Os        -O2        -O3        -O3 -march=…
A1                         85.6 s     74.7 s     74.6 s     74.6 s
A2                         85.3 s     74.6 s     73.7 s     74.5 s
A3                         91.6 s     73.8 s     74.4 s     74.5 s
B1                         93.4 s     90.2 s     72.8 s     72.0 s
B2                         93.7 s     88.3 s     72.0 s     73.7 s
B3                         97.6 s     88.3 s     72.8 s     72.0 s
C1                         93.4 s     88.3 s     72.0 s     73.7 s
C3                         96.2 s     88.3 s     71.9 s     73.7 s

creating a three element vector with inlined constructors
Some observations on these measurements:

  • -march=... is at best neutral: the measured times do not change much in general; performance improves even slightly in only five out of 16 cases, and the two cases with the most significant change (over 3%) actually hurt performance. So for the rest of this post, -march=... will be ignored. Sorry, gentooers. ;)
  • There is no silver bullet with regard to the different implementations: A1, A2 and A3 are the fastest implementations when not inlining constructors and using -Os or -O2 (the quickest A* is ~10% faster than the quickest B*/C*). However, when inlining constructors and using -O3, those same implementations are the slowest (by 2.4%).
  • Most common release builds are still done with -O2 these days. For those, using initializer lists (A1/A2/A3) seems to have a significant edge over the alternatives, whether constructors are inlined or not. This is in contrast to the conclusions drawn from "constructor counting", which assumed these to be slow because of the additional constructor calls needed.
  • The numbers printed in bold are either the quickest implementation in a build scenario or one that is within 1.5% of the quickest implementation. A1 and A2 are sharing the title here by being in that group five times each.
  • With constructors inlined, everything in the loop except DoSomething() could be inlined. It seems to me that the compiler could — at least in theory — figure out that it is asked the same thing in all cases: reserve space for three ints on the heap, fill them each with 4711, make the ::std::vector<int> data structure on the stack reflect that, then hand that to the DoSomething() function it knows nothing about. If the compiler figured that out, all implementations would take the same time. This happens neither on -O2 (where the implementations differ by ~18% from quickest to slowest) nor on -O3 (~3.6%).

One common mantra in application development is "trust the compiler to optimize". The above observations show a few cracks in the foundation of that, especially if you take into account that this is all on the same version of the same compiler running on the same platform and hardware with the same STL implementation. For huge objects with expensive constructors, the constructor-counting approach might still be valid. Then again, those are rarely statically initialized in a bigger bunch into a vector. For the more common scenario of smaller objects with cheap constructors, my tentative conclusion so far would be to go with A1/A2/A3 — not so much because they are quickest in the most common build scenarios on my platform, but rather because their readability is a value of its own while the performance picture is muddy at best.

And hey, if you want to run the tests above on other platforms or compilers, I would be interested in results!

Note: I did these runs for each scenario only once, thus no standard deviation is given. In general they seemed to be rather stable, but these being wallclock measurements, one or the other might be an outlier. Caveat emptor.

by bmichaelsen at March 02, 2015 10:47 PM

Caolán McNamara

gtk3 vclplug, text rendering via cairo

The LibreOffice gtk3 vclplug currently renders basically everything via the "svp" plugin code, which renders to basebmp surfaces and then blits the result of all this onto the cairo surface belonging to the toplevel gtk3 widget.

So the text is rendered with the svp freetype based text rendering and looks like this...

With some hacking I've unkinked a few places and allowed the basebmp backend to take the same stride and the same rgbx format as cairo, so we can now create a 24-bit cairo surface from basebmp backing data. This allows us to avoid conversions on basebmp→cairo and to render onto a basebmp with cairo drawing routines, especially the text drawing ones. So with my in-gerrit-build-queue modifications it renders the same as the rest of the gtk3 desktop.
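The stride matching this relies on follows cairo's layout rule for image surfaces: cairo_image_surface_create_for_data() only accepts data whose row stride matches what cairo_format_stride_for_width() would return for the format. A minimal sketch of that rule for the 24-bit (RGB24) case, with a helper name of my own:

```cpp
#include <cassert>

// Hypothetical helper (the name is mine, not cairo's) spelling out cairo's
// documented stride rule for CAIRO_FORMAT_RGB24: each pixel occupies 32 bits
// (the top byte is unused padding) and every row is padded to a multiple of
// 4 bytes. basebmp has to use the same row stride before its backing data
// can be wrapped in a cairo surface without a conversion step.
inline int rgb24Stride(int width) {
    int bitsPerRow = width * 32;             // RGB24 still stores 32 bits per pixel
    int bytesPerRow = (bitsPerRow + 7) / 8;  // round up to whole bytes
    return (bytesPerRow + 3) & ~3;           // align rows to a 4-byte boundary
}
```

If basebmp's stride matches this value for the surface width, the same memory can back both a basebmp bitmap and a cairo surface.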

by Caolán McNamara ( at March 02, 2015 03:16 PM

February 28, 2015

Andreas Mantke

LibreOffice Extensions Site – Change Language Settings

I worked today on the language setting of some of the LibreOffice extensions hosted on the LibreOffice extensions site. The authors of these extensions had set the language to their native language or to English. Both settings affect the visibility of their projects for users with a different language setting (those users will not see the projects). Thus I changed the language setting to language independent. I could find the projects, releases and files that lacked a language-independent setting because the portal_catalog tool of Plone – the content management system we use for the site – maintains an index of the language setting that I could browse every now and then.

by andreasma at February 28, 2015 10:15 PM

February 27, 2015

Miklos Vajna

Tiled editing: from input handling to selections

In from a living document to input handling, I wrote about how we handle touch and on-screen keyboard events in the LibreOffice Android app. A next step in this TDF-funded project is to provide more UI elements which are specific to touch devices: selections is one of them.

Here are the problems we had to solve to get this working:

  • Long push is not an event that core would recognize.

  • If you use the mouse and have a selection in Writer, it’s only possible to extend the end of it. If you use the keyboard, then it’s possible to shrink the end of it, but still not to adjust the start. On touch devices, it’s natural to have selection handles at the start and end of the selection and to be able to adjust both, in both directions.

  • Additionally, when the user drags the selection handles, the expected behavior is that the position of the selection and the handle are never the same: the handle is placed below the selection position and when you drag the handle, the new selection position is above the handle… ;-)

Long push is reasonable to map to a double mouse click, as in both cases (e.g. in Writer) the user expects a select-word action. But for adjusting selections, we really had to define a new API (lok::Document::setTextSelection()) to allow setting the start or end of the selection to a new logical point (in document coordinates, not paragraph / character indexes).
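To make the shape of that call concrete, here is a hedged sketch: a mock with the same call pattern as the new API, taking a selection-adjustment type plus a logical document position in twips. The enum names and the mock class below are illustrative stand-ins, not the real LibreOfficeKit declarations.

```cpp
#include <cassert>

// Illustrative selection-adjustment types: move the start handle, move the
// end handle, or drop the selection entirely.
enum SetTextSelectionType { SELECTION_START, SELECTION_END, SELECTION_RESET };

// Mock document that records the selection endpoints; the real
// lok::Document keeps this state inside core instead.
struct MockDocument {
    long m_startX = 0, m_startY = 0, m_endX = 0, m_endY = 0;
    void setTextSelection(SetTextSelectionType type, long x, long y) {
        switch (type) {
            case SELECTION_START: m_startX = x; m_startY = y; break;
            case SELECTION_END:   m_endX = x;   m_endY = y;   break;
            case SELECTION_RESET: m_startX = m_startY = m_endX = m_endY = 0; break;
        }
    }
};
```

The point of the type parameter is exactly the touch-handle use case above: the app can drag either end of the selection independently, something mouse-style events could not express.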

If you are interested in how this looks, here is a demo:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="" width="420"></iframe>

Another direction we’re working towards is having the same features in the other applications as well: Impress and Calc. Perhaps not so surprisingly, we hit similar problems in these applications to the ones we had to solve in Writer. The typical problems are:

  • LibreOffice assumes a given portion of the document is visible (the visual area), but the Android view is independent of what LO thinks is visible. Example: LO thinks a table is not visible, so it doesn’t send the selection events for the text inside the table, even though it is in fact visible in the Android app.

  • Instead of calling Invalidate() and waiting for a timer to call Paint(), at some places direct Paint() is performed, so the tile invalidation notification triggered by Invalidate() is missing → lack of content on Android.

  • We render each tile into a VirtualDevice — kind of an off-screen rendering  — and at some places LO assumed that certain content like the actively edited shape’s text is not interesting, as it’s not interesting "during printing".

  • LO’s mouse events are in pixels, and these are then translated to mm100 (hundredths of a millimeter) or twips in core. So counting in pixels is the common language, while the Android app counts everything in twips and doesn’t want to care about what would be visible at what pixel on the screen if LO were running in desktop mode. So we had to make sure that we can pass in event coordinates in twips and get invalidation coordinates in twips, even if previously it was a mix of mm100, twips and pixels.
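That unit zoo is easy to get wrong, so for reference the fixed ratios involved: 1 inch = 1440 twips = 2540 mm100, and a pixel's size in twips depends on the assumed screen resolution. A few hypothetical helpers (the names are mine) capture the integer conversions:

```cpp
#include <cassert>

// Fixed ratios: 1 inch = 1440 twips = 2540 mm100 (hundredths of a
// millimeter). 2540/1440 reduces to 127/72, so the conversions stay in
// integer arithmetic.
constexpr long twipsToMm100(long twips)  { return twips * 127 / 72; }
constexpr long mm100ToTwips(long mm100)  { return mm100 * 72 / 127; }

// Pixel positions additionally depend on the screen resolution; at a
// nominal 96 dpi, one pixel is 1440/96 = 15 twips.
constexpr long pixelsToTwips(long px, long dpi) { return px * 1440 / dpi; }
```

Standardizing on twips at the LibreOfficeKit boundary means the app never needs the dpi-dependent conversion at all.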

Here is how Impress looks like, with working tile invalidation, touch and keyboard handling:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="" width="420"></iframe>

Calc is lagging a bit behind, but it also has working tile invalidation and keyboard handling:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="" width="420"></iframe>

That’s it for now — as usual the commits from me and Tomaž Vajngerl are in master (a few of them are only in feature/tiled-editing for now), so you can try this right now, or wait till next Tuesday and get the Android daily build. :-)

February 27, 2015 11:14 AM

February 26, 2015

Official TDF Blog

LibreOffice 4.4.1 “Fresh” is available for download

Berlin, February 26, 2015 – The Document Foundation announces LibreOffice 4.4.1, the first minor release of the LibreOffice 4.4 “fresh” family, with over 100 fixes over LibreOffice 4.4.0. The release represents the combined effort of the over 900 developers attracted by the project since September 2010, with at least three new developers joining the project each month for 60 months in a row.

New features introduced by the LibreOffice 4.4 family are listed on this web page:

The Document Foundation suggests deploying LibreOffice in enterprises and large organizations when backed by professional support from certified people (a list is available at:

People interested in technical details about the release can access the change log here: (fixed in RC1) and (fixed in RC2).

Download LibreOffice

LibreOffice 4.4.1 and LibreOffice 4.3.6 are immediately available for download from the following link: LibreOffice users, free software advocates and community members can support The Document Foundation with a donation at Money collected will be used to grow the infrastructure, and support marketing activities to increase the awareness of the project, both at global and local level.

by italovignoli at February 26, 2015 11:59 AM