Taoffi's blog

prisonniers du temps

covid-5ync – tackling covid-19

NCBI (National Center for Biotechnology Information) offers a vast database of DNA sequences.

I visited their site to see if I could find information about the latest version of the menacing coronavirus.

Yes, it is available. Its precise name is severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and a long list of its DNA sequences is there.

I had worked, long years ago, on DNA sequence analysis and felt like giving it a try to see how that sequence could be presented… just to see!

My old app (an MFC / C++ application) could open and analyze the sequence. DNA sequence analysis is a long story. As far as I could learn, it involves locating repeats (fragments that are repeated along the sequence), locating 'hairpins' (fragments followed by their complementary nucleotides: A<=>T and G<=>C)… etc.
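To give an idea of what locating a hairpin involves, here is a naive C# sketch (my illustration here, not the old app's actual algorithm; it assumes an upper-case A/C/G/T sequence and a brute-force scan):

using System;
using System.Collections.Generic;

public static class DnaAnalysis
{
    // complement map: A<=>T, G<=>C (assumes an upper-case A/C/G/T sequence)
    static char Complement(char c) => c switch
    {
        'A' => 'T', 'T' => 'A', 'G' => 'C', 'C' => 'G',
        _ => throw new ArgumentException($"Unexpected nucleotide '{c}'")
    };

    // reverse complement of a fragment: complement each base, then reverse
    public static string ReverseComplement(string fragment)
    {
        var chars = new char[fragment.Length];
        for (int i = 0; i < fragment.Length; i++)
            chars[fragment.Length - 1 - i] = Complement(fragment[i]);
        return new string(chars);
    }

    // brute-force scan: for each fragment of the given length, look for its
    // reverse complement further down the sequence (a potential hairpin stem)
    public static IEnumerable<(int stemStart, int complementStart)> FindHairpins(string sequence, int stemLength)
    {
        for (int i = 0; i + stemLength <= sequence.Length; i++)
        {
            string stem = sequence.Substring(i, stemLength);
            int j = sequence.IndexOf(ReverseComplement(stem), i + stemLength, StringComparison.Ordinal);
            if (j >= 0)
                yield return (i, j);
        }
    }
}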

The old app did not seem handy enough for manipulating the downloaded sequence, so I started writing a new WPF one.

A few hours later, the app could display the sequence in a somewhat 'visually appealing' UI, which was encouraging enough to go ahead with some more significant work.

Covid-19 is not for fun!

Yes, it is not really for fun! I am not yet sure how such work can be useful, but whatever effort each of us can provide might help in defeating this new danger. Let us start and see!

For now, what I intend to do is:

  • Port the biotechnology features of the old app to a new, handy UI;
  • Publish the app online for biotechnology engineers working on the subject, and get their feedback;
  • Upload the source code to GitHub for IT community feedback and contributions.

It is a very small step on a long road to defeating this epidemic.

More on this in the next few days / weeks.

Be safe!

doc5ync – word index web page presentation!

Objectives:

  • Display a cloud of index words, each sized relative to its number of occurrences in the e-book information (e-book title, description, author, editor… etc.)
  • On selection of a given word: display the list of e-book references related to the selected word
  • On selection of an e-book: display its information details and all words linked to it

Context:

doc5ync web interface is based on a meta-model engine (simpleSite, currently being renamed to web5ync!).

I talked about meta-models in a past post Here, with some posts about their potential applications here.

The basic concept of meta-models is to describe an object by its set of properties and enable the user to act on these properties by modifying their values in the meta-model database. At runtime, those property values are assigned to each defined object.

In our case, for instance, we have one meta-model describing web page elements, and another describing the dataset of index words and their related e-books.

For web page elements, the approach considers a web page as a set of html tags (i.e. <div>, <table><tr><td>…, <img>… etc.), where each tag has a set of properties (style and other attributes) for which you can define the desired values. At runtime, your meta-model-defined web page comes to life by loading its html tags, assigning each the defined values and injecting the output of the process into the web response.

A dataset is similarly considered as a set of rows (obtained through a data source), each composed of data cells containing values. Data cells can then either be presented and manipulated through web elements (the html tags above) or otherwise manipulated through web services.
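To make the web-elements side of this concrete, here is a rough C# sketch of the idea (the types and names are illustrative only, not web5ync's actual model):

// conceptual sketch: a 'meta-defined' html tag renders by applying the
// property values defined for it (in web5ync, these come from the database)
using System.Collections.Generic;
using System.Linq;

public class MetaHtmlTag
{
    public string TagName { get; set; }                    // e.g. "div", "img"
    public Dictionary<string, string> Properties { get; }  // e.g. style, id, onclick
        = new Dictionary<string, string>();
    public string InnerContent { get; set; } = "";

    // at runtime, the defined property values are injected into the output
    public string Render()
    {
        string attributes = string.Join(" ",
            Properties.Select(p => $"{p.Key}=\"{p.Value}\""));
        return $"<{TagName} {attributes}>{InnerContent}</{TagName}>";
    }
}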

Data storage and relationships

As we mentioned in the previous post, index words and their related e-books are stored in database tables as illustrated in the following figure:

Each word of the index comes with its number of occurrences in the e-book text sequences (determined during the Trie scan).

Html formatting using a SQL view

To reflect this information in the presentation, we used a view that formats an html div element for each word, sized relative to its number of occurrences. The query looks like the following code:

select
    w.id        as word_id
  , w.n_occurs
  , N'<div style="BASIC STYLE STRING HERE…; display:inline;'

    -- add the font-size style relative to the number of occurrences
    + case
        when w.n_occurs between 0  and 2    then N' font-size:10pt;'
        when w.n_occurs between 3  and 8    then N' font-size:14pt;'
        when w.n_occurs between 9  and 15   then N' font-size:16pt;'
        when w.n_occurs between 16 and 24   then N' font-size:22pt;'
        when w.n_occurs >= 25               then N' font-size:26pt;'
      end
    + N'"'

    -- add whatever html attributes we need (hover/click…)
    + N' id="div' + convert(nvarchar(32), w.id)
    + N'" onclick="select_data_cell(''' + convert(nvarchar(32), w.id) + N''');">'

    -- the word itself, and the closing tag
    + w.word_string + N'</div>'
        as word_string_html

-- add other columns if needed
from dbo.doc5_trie_words w
order by w.word_string

 

The above view provides us with an html-preformatted string for each index word in the data row.

Tweaking the data rows into a cloud of words

On a web page, a dataset is commonly displayed as a grid (table / columns / rows), and web5ync knows how to read a data source and output its rows in that form. But that did not seem convenient in our case: it simply displays the index words one per row, which is not really the presentation we are looking for!

 

To resolve this, we simply change the dataset's web container from a <table> (and its rows / cells) into <div> tags (with style="display:inline").

Here is a sample of the html code of the table presentation:

<table>
  <tr>
    <td>
    <div style="font-size:16pt;" onclick="select_data_cell('29008');">After</div>
  </td>
</tr>
<tr>
  <td>
    <div style="font-size:26pt; onclick="select_data_cell('28526');">after</div>
  </td>
</tr>

<!-- the table rows go on... --> 


And here is a sample html code of the presentation we are looking for:

<div style="display:inline;">
    <div id="td_word_string_html82" style="display:inline;">
       <div style="font-size:26pt;" onclick="select_data_cell('28526');">after</div>
      </div>
</div>

<div style="display:inline;">
    <div id="td_word_string_html83">
      <div style="font-size:16pt;" onclick="select_data_cell('29008');">After</div>
    </div>
</div>

<div style="display:inline;">
   <div style="display:inline;">
      <div style="font-size:10pt;" onclick="select_data_cell('17657');">AFTER</div>
    </div>
</div>

 

Which looks closer to what we want:

Interacting with index words

The second part of our task is to let the user interact with the index words: clicking a word displays its related e-books; clicking an e-book displays the e-book's details plus the index words specifically linked to it.

For this, we are going to use a few of the convenient features of web5ync, namely master/details data binding and tabs. (I will write more about these features in a future post.)

Web5ync master/details binding allows linking a subset of data to the selected item of a master section. Basically, each data section is an iframe. The event of selecting a data row in one iframe can update the document source of one or more other iframes. All we need is to: 1. define a column to be used as the row's id, and 2. define how the value of that id should be passed to the target iframe (typically: a url parameter name).

Tabs are convenient in our case as they allow distributing the information over several areas while optimizing the web page's space usage.

In the figure above, we have 3 main data tabs: Explore by words, Document info and Selected document words.

On the first tab:

  • clicking a word (in the upper iframe) should display the list of its related e-books (in the lower iframe of that same tab)
  • clicking an e-book row in the lower iframe should: first, display its details (in an iframe on the second tab), then display all the words directly linked to the selected e-book (in an iframe on the 3rd tab). (figures below)

In that last tab, we can play once again with the displayed words, to show other documents sharing one of them:

doc5ync – Trie database integration process

Here I continue the excursion into using the Trie pattern and structures to index e-book words for the doc5ync project.

If you missed the beginning of the story, you can find it Here, Here and Here

The role of the client integration tool (a WPF app) is to pull the e-book information to be indexed from the database, index the words and create the links between each word and its related e-books. This is done using a few settings: the language to index, the minimum number of chars to consider a sequence a 'word'… etc.

trie-with-data-db-integration-process

The integration process flow is quite simple:

  • Once we are happy with the obtained results, we use the tool to push the trie into a staging table in the database.
  • A database stored procedure can then extract the staging data into the tables used for presenting the index on the project web page.

trie-web-page

The staging table has a few fields:

  • The word string
  • The related e-book ID (relationship => docs table (e-books))
  • The number of occurrences of the word
  • The timestamp of the last insertion

The only difficulty encountered was the number of records (often tens of thousands) to push to the staging table. The (artisanal!) solution was to concatenate the values of blocks of records to be inserted (i.e.: ‘insert into table(field1, field2, …) values (v1, v2, …), (v3, v4, …), …’ etc.). Sending 150 records per command seemed to be a sustainable choice.
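A C# sketch of that batching logic (the staging table and column names below are illustrative, not the actual schema; string values are escaped by doubling quotes, in the same artisanal spirit):

// sketch: push trie words to the staging table in multi-row inserts of 150
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Text;

public static class StagingPusher
{
    const int BlockSize = 150;   // ~150 rows per command proved sustainable

    public static void PushWords(
        SqlConnection connection,
        IEnumerable<(string Word, int DocId, int Occurrences)> rows)
    {
        var block = new List<(string Word, int DocId, int Occurrences)>(BlockSize);
        foreach (var row in rows)
        {
            block.Add(row);
            if (block.Count == BlockSize)
            {
                InsertBlock(connection, block);
                block.Clear();
            }
        }
        if (block.Count > 0)
            InsertBlock(connection, block);
    }

    static void InsertBlock(
        SqlConnection connection,
        List<(string Word, int DocId, int Occurrences)> block)
    {
        // one multi-row insert: insert into …(…) values (…), (…), …
        var sql = new StringBuilder(
            "insert into dbo.doc5_trie_staging(word_string, doc_id, n_occurs) values ");
        sql.Append(string.Join(", ", block.Select(r =>
            $"(N'{r.Word.Replace("'", "''")}', {r.DocId}, {r.Occurrences})")));

        using (var command = new SqlCommand(sql.ToString(), connection))
            command.ExecuteNonQuery();
    }
}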

The staging table data is to be dispatched into two production tables:

  • doc5_trie_words:
    • word ID
    • language ID
    • word string
    • word’s number of occurrences
    • comments

 

  • doc5_trie_word_docs:
    • word ID (relationship => the above table)
    • e-book ID (relationship => docs (e-books) table)

 

Once the data is in the staging table, the work of the stored procedure is quite straightforward:

  • Empty the current words table (which cascade-deletes the word / doc reference records)
  • Import the staging word (strings and occurrences) records into doc5_trie_words
  • Import the related word / doc IDs into doc5_trie_word_docs.

Many words are common to several languages and e-books. Therefore, assigning a language to a word makes no sense unless all its related documents are in one specific language. That is the additional and final task of the stored proc.
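To make that rule concrete, here is the same logic expressed over in-memory rows in C# (the actual implementation is plain SQL inside the stored procedure; the WordDoc shape below is illustrative):

// illustrative shape: one row per (word, doc) link, with the doc's language
using System.Collections.Generic;
using System.Linq;

public class WordDoc
{
    public int WordId;
    public int DocId;
    public int DocLanguageId;
}

public static class LanguageAssignment
{
    // for each word, return the language to assign, or null when its related
    // documents span more than one language
    public static Dictionary<int, int?> AssignLanguages(IEnumerable<WordDoc> links)
    {
        return links
            .GroupBy(l => l.WordId)
            .ToDictionary(
                g => g.Key,
                g =>
                {
                    var languages = g.Select(l => l.DocLanguageId).Distinct().ToList();
                    return languages.Count == 1 ? (int?)languages[0] : null;
                });
    }
}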

Next step: the index web page presentation!

That will be the subject of the next post!

doc5ync Trie integration tool - UI Tag cloud, paging and navigation

For the integration client tool I talked about in the previous post, we need to display the list of words in a way similar to the tag clouds found on blogs.

For this we will use a ListView with some customization (see the Xaml code below).

Paging

A more important question is the number of items to show. As we have, in most cases, thousands of items to display, we need a paging mechanism.

A solution – a Linq extension – proposed by https://stackoverflow.com/users/69313/wim-haanstra was a good base for a generic paging module:

 

// credit: https://stackoverflow.com/users/69313/wim-haanstra
// usage: MyQuery.Page(pageNumber, pageSize)
using System.Collections.Generic;
using System.Linq;

public static class LinqPaging
{
    // used by LINQ to SQL
    public static IQueryable<TSource> Page<TSource>(this IQueryable<TSource> source, int page, int pageSize)
    {
        return source.Skip((page - 1) * pageSize).Take(pageSize);
    }

    // used by LINQ to Objects
    public static IEnumerable<TSource> Page<TSource>(this IEnumerable<TSource> source, int page, int pageSize)
    {
        return source.Skip((page - 1) * pageSize).Take(pageSize);
    }
}

trie-with-data-paging-base

iObjectPaging is now a generic class accepting any collection of data, paged through calls to the Linq extension.

Let us derive, from this base, two specific paging classes: one for our words and another for our documents (DataItems):

trie-with-data-paging-2
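In rough sketch form, those classes could look like the following (simplified: the real classes also need change notification (INotifyPropertyChanged) for the bindings to refresh, and the iDataItemPaging name is illustrative):

// rough sketch of the paging base and its two derived classes
using System.Collections.Generic;
using System.Linq;

public class iObjectPaging<T>
{
    protected List<T> _sourceCollection = new List<T>();
    protected int     _pageSize;
    protected int     _currentPage = 1;

    public iObjectPaging(int pageSize) { _pageSize = pageSize; }

    public IEnumerable<T> SourceCollection
    {
        get => _sourceCollection;
        set { _sourceCollection = value?.ToList() ?? new List<T>(); _currentPage = 1; }
    }

    public int PageCount => (_sourceCollection.Count + _pageSize - 1) / _pageSize;

    // the UI elements are bound to this collection
    public IEnumerable<T> CurrentPageData => _sourceCollection.Page(_currentPage, _pageSize);

    // Next / Previous buttons bind their enabled state to these
    public bool CanMoveNext     => _currentPage < PageCount;
    public bool CanMovePrevious => _currentPage > 1;

    public void MoveNext()     { if (CanMoveNext)     _currentPage++; }
    public void MovePrevious() { if (CanMovePrevious) _currentPage--; }
}

// one paging class for the trie words, another for the data items (e-books)
public class iWordPaging : iObjectPaging<iTrieWord>
{
    public iWordPaging(int pageSize) : base(pageSize) { }
}

public class iDataItemPaging : iObjectPaging<iDataItem>
{
    public iDataItemPaging(int pageSize) : base(pageSize) { }
}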

That is all we need for paging. Each list simply assigns its data to the corresponding paging object, and the UI elements are bound to the CurrentPageData collection. Next / Previous buttons allow navigating through the collection's pages.

The main view model, for instance, declares a paging member:

protected iWordPaging	_wordPaging	= new iWordPaging(200);

And assigns its Words collection to this paging member whenever the collection changes:

_wordPaging.SourceCollection	= ItemList?.AllWords;

 

Words as Tag Cloud

A ListView control should be customized for this.

We need to customize its ItemsPanel and ItemTemplate:

<ListView x:Name="listItems" 
			Grid.Row="1" 
			ItemsSource="{Binding WordPaging.CurrentPageData, IsAsync=True}" 
			BorderBrush="#FFA3A3A4" BorderThickness="1"
			SelectedItem="{Binding SelectedWord, Mode=TwoWay, IsAsync=True}" Background="{x:Null}"
			Padding="12" ScrollViewer.HorizontalScrollBarVisibility="Disabled"
			>
   <ListView.ItemsPanel>
       <ItemsPanelTemplate>
          <WrapPanel MaxWidth="{Binding ElementName=listItems, Path=ActualWidth, Converter={StaticResource widthConverter}}" HorizontalAlignment="Left" Height="auto" Margin="12,0,12,0" />
       </ItemsPanelTemplate>
   </ListView.ItemsPanel>

   <ListView.ItemTemplate>
       <DataTemplate>
          <local:iWordCtrl DataContext="{Binding }" Width="120" Height="40" />
       </DataTemplate>
    </ListView.ItemTemplate>
  </ListView>
 

Data items grid

A DataGrid bound to the selected word's data items will display its (paged) data items:

<DataGrid ItemsSource="{Binding CurrentPageData, IsAsync=True}">
   <DataGrid.Columns>
    …
    …


 

With this in place, we are now able to:

  • Navigate through word pages
  • When the selected word changes, its related (paged) data items (e-books) are displayed in the DataGrid
  • Next / Previous buttons can be used to navigate, and are enabled or disabled according to the paging context (see the paging base class in the diagram above)
  • A list of pages (combo box) also allows jumping to a specific page

 

trie-with-data-paged-word-cloud

 

Sample paged datagrid of e-books containing the selected word

trie-with-data-paged-datagrid

In the next post, we will see the database integration process.

doc5ync Trie index integration tool

This is a maintenance WPF client application for indexing the words found in e-book titles and descriptions for the doc5ync project (http://doc5.5ync.net/).

Before talking about technical details, let us start with some significant screenshots of the app.

1. Scanning all languages' words for 10000 data records, with a minimum word length of 4 chars.

trie-with-data-window1

After the scan, the words are displayed (on the left side of the above figure) highlighting the occurrences of each word (greater font size = more occurrences). This is done using a user control, itself using a converter.

trie-with-data-word-control

A simple Border enclosing a TextBlock

<UserControl.Resources>
	<conv:TrieWordFontSizeConverter x:Key="fontSizeConverter" />
</UserControl.Resources>
<Grid x:Name="grid_main">
	<Border BorderBrush="DarkGray" CornerRadius="2" Background="#FFEDF0ED" Height="auto" Margin="2" BorderThickness="1">
		<TextBlock Text="{Binding Word}"
					Padding="4px"
					VerticalAlignment="Center"
					HorizontalAlignment="Center"
					FontSize="{Binding ., Converter={StaticResource fontSizeConverter}, FallbackValue=12}"
					>
		</TextBlock>
	</Border>

	</Grid>
 
 
The converter emphasizes the font size relative to the word’s occurrences:
 
using System;
using System.Globalization;
using System.Windows;
using System.Windows.Data;

public class TrieWordFontSizeConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        double minFontSize     = 11.0,
               defaultFontSize = 12.0,
               maxFontSize     = 32.0,
               size;

        if (System.ComponentModel.DesignerProperties.GetIsInDesignMode(new DependencyObject()))
            return defaultFontSize;

        double min = (double) iWordsCentral.Instance.MinOccurrences,
               max = (double) iWordsCentral.Instance.MaxOccurrences;
        iTrieWord word = value as iTrieWord;

        if (word == null)
            return defaultFontSize;

        // cap the denominator so that frequent words reach the maximum size quickly
        max  = Math.Min(9, max);
        size = (word.Occurrences / max) * maxFontSize;

        // clamp the result between the min and max font sizes
        if (size > maxFontSize)
            return maxFontSize;
        if (size < minFontSize)
            return minFontSize;
        return size;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        // one-way binding only
        throw new NotSupportedException();
    }
}

Load, Scan and link words to data items

The View Model objects and processing flow

trie-with-data-view-model

iWordsCentral is the 'main' view model (a singleton) which provides word scanning and data object assignment through its ScanWordsData member (an iData object).

ItemList (an iDataItemList) is the iData member responsible for building the Trie (its member) and assigning the Trie's words to its data items.

On Load button click, the MainWindow calls its LoadData() method.

 
async void ReloadData()
{
	await Task.Run(() => iWordsCentral.Instance.LoadData());
}
 

The method loads the data records (the desired number of records is a parameter… see the main figure above), assigns them to the ItemList of the scan object (iData), then calls the iData method that builds the Trie and assigns data items to each of the Trie's nodes.

 

_scanWordsData.ItemList	= rootList;

bool scanWordsResult = await _scanWordsData.ScanDataWordsAsync(_minWordLength, _includeDocAreaWords, _cancelSource.Token);

 

The iData object calls its ItemList to do the job… the method proceeds as in the following code:

public async Task<bool> ScanDataWordsAsync(int minWordLength, bool scanRootItems, CancellationToken cancelToken)
{
    if (_trie == null)
        _trie = new iTrie();

    // build a single string with all textual items and parse its words
    iTrie  trie          = _trie;
    string global_string = "";

    foreach (iDataItem item in this)
        global_string += item.StringToParse;

    await Task.Run(() => _trie.LoadFromStringAsync(global_string, minWordLength, notifyChanges: false));

    _trie.Sort();
    List<iTrieWord> trieWordList = trie.AllStrings;

    // copy the Trie words (strings) to a DataTrieWord list
    CopyDataWords(trieWordList);

    // assign words to data items
    bool result = await AssignTrieWordsDataAsync(scanRootItems, cancelToken);
    return result;
}


 

The data item list loops through all its words and data items, calling each data item to assign itself to the given word if the word is contained in its data:

foreach (var word in _dataWordList)
{
   foreach (var ditem in this)
      await ditem.AssignChildrenTrieWordAsync(scanRootItems, word, cancelToken);
}

The data item looks for any of its data where a match of the given word is found, and assigns those items to the word:

var wordItems = this.Children.Where( i => i.Description != null
                && i.Description.IndexOfWholeWord( word.Word) >= 0);

IndexOfWholeWord note

That is an (efficient) string extension, important to ensure that a whole word is present in the data. I struggled to find a solution for this question, and finally found an awesome one proposed by https://stackoverflow.com/users/337327/palota

 

// credit: https://stackoverflow.com/users/337327/palota
public static int IndexOfWholeWord(this string str, string word)
{
    for (int j = 0; j < str.Length && 
        (j = str.IndexOf(word, j, StringComparison.Ordinal)) >= 0; j++)
        if ((j == 0 || !char.IsLetterOrDigit(str, j - 1)) && 
            (j + word.Length == str.Length || !char.IsLetterOrDigit(str, j + word.Length)))
            return j;
    return -1;
}
 

Finally, as you may have noticed, for performance measurement a simple StopWatch is embedded in the main view model to report the elapsed time during the process. For this to make sense, all methods are of course async, notifying changes through the UI thread (Dispatcher). You might ignore all the async artifacts in the above code to better concentrate on the processing steps themselves.

Presentation

Once all the processing is done, there is still the presentation UI work to do in order to display the document list of a selected word.

This will be the subject of a next post.

 

doc5ync – the Trie in practice for online e-books

I spent the past few months working on a new web project referencing online e-books (http://doc5.5ync.net/)

The goal of the project was not to build a new online library (many good libraries are already out there) but rather to offer a central reference for all that exists, adding some features to these references to provide a new analytical view of e-books.

Most online libraries offer access to books that are now in the 'public domain' (i.e. no longer copyright-protected) and thus available for free download.

For an analytical approach, I started using the Trie structure (which I talked about in a previous post) to analyze the textual elements of the referenced e-books and provide relational aspects among them.

Just a reminder, as explained in the previous post: a Trie is a tree-like structure where a node has a parent, neighbors and descendants. The structure is particularly interesting for text indexing because, whatever the language, any textual unit (word) is necessarily composed of units of that language's alphabet (whose number is quite limited). By adding a flag to end-of-word nodes, we can build a Trie whose root is composed of the few units of the alphabet, with branches leading to the text's words.

trie-word-nodes
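As a minimal sketch of the idea (the project's actual iTrie class is richer: sorting, word collections, async loading…), a trie and its insert / search operations can look like this:

// minimal trie sketch for illustration only
using System.Collections.Generic;

public class TrieNode
{
    // one child per alphabet unit (char)
    public Dictionary<char, TrieNode> Children { get; } = new Dictionary<char, TrieNode>();
    // flag marking an end-of-word node
    public bool IsEndOfWord { get; set; }
}

public class Trie
{
    private readonly TrieNode _root = new TrieNode();

    public void Insert(string word)
    {
        var node = _root;
        foreach (char c in word)
        {
            if (!node.Children.TryGetValue(c, out var child))
                node.Children[c] = child = new TrieNode();
            node = child;
        }
        node.IsEndOfWord = true;
    }

    public bool Contains(string word)
    {
        var node = _root;
        foreach (char c in word)
            if (!node.Children.TryGetValue(c, out node))
                return false;
        return node.IsEndOfWord;
    }
}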

This compact structure enables fast and efficient search and retrieval of elements in large text sequences, which seems to be a good base for our e-book text indexing and analysis.

Using the trie structure to index the e-book details (titles, descriptions, authors…) of the relatively large number of referenced e-books (approx. 9000 as of writing) was straightforward and efficient.

Now, a given unit (word) in this trie might be related to one or more of our e-books. How do we link each of our trie nodes to its set of 'data'? That is the subject of this brief post.

We are going to build upon the elements mentioned in the previous post:

  • We will use our Trie with its (char) Dictionary and Nodes.
  • Our trie provides us with its words presented as a collection of iTrieWord objects
  • Let us create a new object iTrieDataWord (deriving from iTrieWord)
  • This last object will contain a collection of 'data items' (in our concrete case, a collection of e-books)

trie-with-data

How to proceed?

After some experimentation, I ended up using the following steps, which seemed good in terms of efficiency and performance:

  • Load all the e-books' textual sequences (titles, descriptions, author information… for the time being)
  • Build the Trie of these text sequences (more about this later)… which provides us with its words (iTrieWord) collection
  • Now, with the loaded collection of e-book records (the iDataItem(s), each containing the e-book title, description and author information), each record (iDataItem) can assign itself to any of the Trie words whenever that word is part of its own data.

Some additional considerations in the process are quite important:

    • One important point is to define what a 'word' is, in terms of the minimum number of characters to consider a sequence a 'word'. As the referenced e-books are multilingual, it was somehow clear that this threshold is language-dependent. In Arabic, for instance, words tend to be short in terms of number of characters (Arabic vowels are often part of the character). After some research, I found that considering 4 chars as a minimum is an acceptable compromise, as it still allows searching the e-books by year (the author's or the book's), which may be quite useful.
    • It is also important to define what the 'word delimiters' are (spaces are not the only ones to consider!). Actually, that is also language-dependent in some ways… and as such requires experimenting with all the languages used in the given project.
    • Finally: what are we going to do for all this to be useful? I.e. are we going to persist this Trie, or rather run it as a (runtime-queryable) indexing service?… etc. For the doc5 project, we decided to persist the results in data tables, running the scan process periodically.

Some performance numbers

Some numbers to justify using the above steps:

  • Reading the data records + building a Trie of 40365 words (min = 4 chars): 17s
  • Processing the 9000 e-books' information (i.e. building the Trie + creating 358000 links to its words): 8min 30s

I will post some sample code in the coming weeks. Meanwhile, you may have a look at http://doc5.5ync.net/ (the current version presenting the results).

A bit late: I wish you all a happy 2020, with many useful projects and much fun!

Jet.oledb.4.0 and utf8 bom story

As you may know, a text file may specify its encoding by a 'bom' (byte order mark): several signature bytes at its very beginning. For utf8 encoding, the signature bytes are 0xef, 0xbb, 0xbf.

I came across an issue while manipulating csv files, for which I had decided to use utf8 (thinking that was a good choice for a multi-cultural environment!). The process involved reading and writing back to the same file after some insertions and updates, all using the microsoft.jet.oledb.4.0 provider (with a schema.ini specifying CharacterSet=65001; 65001 being the utf8 code page).

My csv files had a header row whose first column was, ironically, named 'Match ID'. A few manipulations revealed a somewhat strange behavior: although the debugger showed that the first column's name was 'Match ID', I could no longer access the 'Match ID' column by its name. Using the watch window, I asked the debugger:
myColumn.ColumnName == "Match ID"… it replied false… weird!

Viewing the column name's CharArray in hexa gives more significant information:

 column name issue

That is evidently endless! As time goes by, as you manipulate your columns and rewrite the csv file, you end up having your 'Match ID' column name prefixed by the utf8 bom bytes as many times as you rewrite the file. And if you are a nice guy who lets the users reorder the columns as they need, you may end up having all your columns affected by the issue!

column name bytes

Changing the files' encoding to unicode (whose bom signature is 0xff 0xfe) does not reproduce the issue, which makes it clear that the source of annoyance is not utf8 itself but rather the jet.oledb.4.0 data provider with utf8. Still, identifying the source is halfway to solving the issue :).

How to solve this?

Well, you may think of 'sanitizing' your column names at every load, which, in my view, does not seem quite practical.
In my case, I just switched to unicode (despite the wasted bytes!) to preserve the multi-cultural data requirements.
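For completeness, should you prefer the sanitizing route, a minimal sketch could look like this (a hypothetical helper, not code I ended up shipping):

// strip utf8 bom residue from column names after each load: both the decoded
// bom char (U+FEFF) and the raw bom bytes mis-read as ANSI ("ï»¿")
using System.Data;

public static class ColumnNameSanitizer
{
    public static void Sanitize(DataTable table)
    {
        foreach (DataColumn column in table.Columns)
        {
            column.ColumnName = column.ColumnName
                .TrimStart('\uFEFF')
                .Replace("ï»¿", "");
        }
    }
}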

xsl witness!

Transforming xml content through xsl stylesheets is a useful and relatively common feature in the development process. I talked about this in the previous post about OneNote pages' html preview.
Searching my personal code toolbox, I just found this 'iXslWitness', a tool I wrote a couple of years ago to check the effectiveness of a stylesheet in transforming xml to html. Its usage is quite simple: you select an xml file and the xsl stylesheet to use, and you get the html-transformed content.
I hope it can be useful for anyone involved in such tasks!
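At its core, such a tool boils down to a few framework calls; here is a minimal sketch of the transformation step (not iXslWitness's actual code):

// minimal xml + xsl => html transformation sketch
using System.Xml.Xsl;

public static class XslPreview
{
    public static void TransformToHtml(string xmlPath, string xslPath, string htmlOutputPath)
    {
        var transform = new XslCompiledTransform();
        transform.Load(xslPath);                       // compile the stylesheet
        transform.Transform(xmlPath, htmlOutputPath);  // write the html output
    }
}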
A screenshot of transforming a OneNote page's xml content (a list of 'The World If' publications of The Economist newspaper):

worldif2017-witness

You can download the tool Here.
The source code is Here.