Taoffi's blog

prisonniers du temps

doc5ync – word index web page presentation!

Objectives:

  • Display a cloud of index words, each sized relative to its number of occurrences in e-book information (title, description, author, editor, etc.)
  • On selection of a given word: display the list of e-book references related to the selected word
  • On selection of an e-book: display its information details and all words linked to it

Context:

The doc5ync web interface is based on a meta-model engine (simpleSite, currently being renamed to web5ync!).

I talked about meta-models in a past post here, with some other posts about their potential applications here.

The basic concept of meta-models is to describe an object by its set of properties and enable the user to act on these properties by modifying their values in the meta-model database. At runtime, those property values are assigned to each defined object.

In our case, for instance, we have a meta-model describing web page elements, and another describing the dataset of index words and their related e-books.

For web page elements, the approach considers a web page as a set of HTML tags (e.g. <div>, <table>/<tr>/<td>, <img>, etc.), where each tag has a set of properties (style and other attributes) for which you can define the desired values. At runtime, your meta-model-defined web page comes to life by loading its HTML tags, assigning each of them the defined values, and injecting the output of the process into the web response.

A dataset is similarly considered as a set of rows (obtained through a data source), each composed of data cells containing values. Data cells can then be presented and manipulated through web elements (the HTML tags above) or otherwise manipulated through web services.
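To make this more concrete, here is a minimal C# sketch (class and property names are mine, not web5ync's) of how a meta-model-defined tag might be turned into HTML at runtime:

using System.Collections.Generic;
using System.Linq;

// a meta-model entry: one html tag and the property values defined for it in the database
public class MetaTag
{
    public string TagName { get; set; }                 // e.g. "div"
    public Dictionary<string, string> Attributes { get; } = new Dictionary<string, string>();
    public string InnerText { get; set; }

    // at runtime, the defined values are turned into the html injected into the response
    public string Render()
    {
        var attributes = string.Concat(Attributes.Select(a => $" {a.Key}=\"{a.Value}\""));
        return $"<{TagName}{attributes}>{InnerText}</{TagName}>";
    }
}

For example, a MetaTag with TagName "div" and Attributes["style"] = "display:inline;" would render as <div style="display:inline;">…</div>.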

Data storage and relationships

As we mentioned in the previous post, index words and their related e-books are stored in database tables as illustrated in the following figure:

Each word of the index provides us with its number of occurrences in e-book text sequences (known at Trie scan time).

HTML formatting using a SQL view

To reflect this information in the presentation, we used a SQL view to format an HTML div element for each word, sized relative to its number of occurrences. The view's query looks like the following:

select
  w.id            as word_id
 , w.n_occurs
 , N'<div style="BASIC STYLE STRING HERE…; display:inline;'

-- add the font-size style relative to number of occurrences
+
case
    when w.n_occurs between 0 and 2    then N' font-size:10pt;'
    when w.n_occurs between 3 and 8    then N' font-size:14pt;'
    when w.n_occurs between 9 and 15    then N' font-size:16pt;'
    when w.n_occurs between 16 and 24    then N' font-size:22pt;'
    when w.n_occurs between 25 and 2147483647    then N' font-size:26pt; '
end

+ N'"'
-- add whatever html attributes we need (hover/click…)
+
N' id="div' + convert(nvarchar(32), w.id)
+ N'" onclick="select_data_cell(''' + convert(nvarchar(32), w.id) + N''');" '
     as word_string_html

-- add other columns if needed
from dbo.doc5_trie_words w
order by w.word_string

 

The view above provides an HTML-preformatted string for each index word in the data rows.

Tweaking the data rows into a cloud of words

On a web page, a dataset is commonly displayed as a grid (table / columns / rows), and web5ync knows how to read a data source and output its rows in that form. But that did not seem convenient in our case, because it simply displays the index words one per row, which is not really the presentation we are looking for!

 

To resolve this, we simply need to change the dataset's web container from a <table> (and its rows / cells) into <div> tags (with style="display:inline;").

Here is a sample of the HTML code of the default (table) presentation:

<table>
  <tr>
    <td>
      <div style="font-size:16pt;" onclick="select_data_cell('29008');">After</div>
    </td>
  </tr>
  <tr>
    <td>
      <div style="font-size:26pt;" onclick="select_data_cell('28526');">after</div>
    </td>
  </tr>

  <!-- the table rows go on... -->


And here is a sample of the HTML code of the presentation we are looking for:

<div style="display:inline;">
    <div id="td_word_string_html82" style="display:inline;">
       <div style="font-size:26pt;" onclick="select_data_cell('28526');">after</div>
      </div>
</div>

<div style="display:inline;">
    <div id="td_word_string_html83">
      <div style="font-size:16pt;" onclick="select_data_cell('29008');">After</div>
    </div>
</div>

<div style="display:inline;">
   <div style="display:inline;">
      <div style="font-size:10pt;" onclick="select_data_cell('17657');">AFTER</div>
    </div>
</div>

 

Which looks closer to what we want:

Interacting with index words

The second part of our task is to allow the user to interact with the index words: clicking a word displays its related e-books; clicking an e-book displays the e-book details plus the index words specifically linked to that e-book.

For this, we are going to use a few of web5ync's convenient features, namely master/details data binding and tabs. (I will write more about these features in a future post.)

Web5ync master/details binding allows linking a subset of data to a selected item in the master section. Basically, each data section is an iframe. Selecting a data row in one iframe can update the document source of one or more other iframes. All we need is to: 1. define a column that will be used as the row's id, and 2. define how the value of that id should be passed to the target iframe (typically: a url parameter name).

Tabs are convenient in our case as they will allow distributing the information in several areas while optimizing web page space usage.

In the figure above, we have 3 main data tabs: Explore by words, Document info and Selected document words.

On the first tab:

  • clicking a word (in the upper iframe) should display the list of its related e-books (in the lower iframe of that same tab)
  • clicking an e-book row in the lower iframe should first display its details (in an iframe on the second tab), then display all words directly linked to the selected e-book (in an iframe on the 3rd tab). (figures below)

In that last tab, we can play once again with the displayed words, to show other documents sharing one of them:

Xamarin forms backyard - exploring MSBuild projects

In a previous post, I talked about how to manually change the project (.csproj) file to create App.xaml and App.xaml.cs files in a Xamarin Forms project. The reason for this is to be able to declare and use global application resources (styles, assets, etc.) linked to the main application file (App.cs changed to App.xaml.cs). That is, in a way, to repair a guilty XF project template that doesn't create these items by default.

As in all development processes, when you use Xamarin Forms you spend your time coding and building your projects.

On Windows, this essential build process goes through MSBuild, which reads the .csproj file's instructions and interprets them into specific actions to compile and package your applications.

Those .csproj files are XML files containing configurations, variables and instructions… many of which can be manipulated in Visual Studio or Xamarin Studio. Still, some of these configurations require a deeper dive into MSBuild details. Your project, for instance, may sometimes fail to build for reasons that are difficult to trace without such a dive.

Diving into MSBuild is something different from simply reading the .csproj xml instructions.

Understanding the way these instructions are interpreted and handled by MSBuild is crucial for the success of your investigations.

MSBuild API to the rescue

Fortunately, MSBuild offers an API that allows you to explore or modify the build engine behavior according to your requirements.

I used this API to build a simple MSBuild browser (a Windows WPF application) that lets you examine, in detail, how MSBuild interprets a .csproj or .vbproj file.

A project configuration browser

The browser uses the objects defined in the MSBuild Microsoft.Build.Evaluation namespace.

Three main view models of the application are used to represent MSBuild objects:

  • iProject: msbuild project view model. Encapsulates the collection of:
  • iProjectItemType: project item types (and actions). Each project item type encapsulating a collection of:
  • iProjectItem: an item of the related type. This object encapsulates a property bag of the related MSBuild item's properties (For instance: metadata).

 

The PropertyBag object is used to allow future application extensibility. It parses MSBuild objects' properties independently of the referenced MSBuild version (the current application references MSBuild 14.0.0.0).
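As an illustration of what the API exposes, here is a minimal console sketch (not the WPF browser itself) that enumerates a project's item types, items and metadata with Microsoft.Build.Evaluation, which is roughly the data the three view models above encapsulate:

using System;
using System.Linq;
using Microsoft.Build.Evaluation;

class ProjectExplorer
{
    static void Main(string[] args)
    {
        // load and evaluate the project file passed on the command line
        var project = new Project(args[0]);

        // group the evaluated items by item type (Compile, Reference, None, ...)
        foreach (var itemType in project.Items.GroupBy(i => i.ItemType))
        {
            Console.WriteLine(itemType.Key);
            foreach (ProjectItem item in itemType)
            {
                Console.WriteLine("  " + item.EvaluatedInclude);

                // each item exposes its metadata as name/value pairs
                foreach (ProjectMetadata metadata in item.Metadata)
                    Console.WriteLine("    {0} = {1}", metadata.Name, metadata.EvaluatedValue);
            }
        }

        ProjectCollection.GlobalProjectCollection.UnloadProject(project);
    }
}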

 

The result looks like this:

 

You can download the binaries here.

If you are interested in extending features, you can download the source code here.

 

Meta-models: towards a universal dependency injection framework

Intro

I attended an interesting presentation today about unit tests. Part of the presentation was related to the use of dependency injection for unit testing. That brought the benefits of meta-models to my mind once again.

The problem

Unit testing often faces the problem of having to instantiate objects that may impact the test itself or simply make the test impossible.

A sound example of this is when the method to be tested requires the instantiation of an object that itself requires, for instance, a database connection.

In this schema, we have 3 actors

  • The function
  • The caller
  • The object (SomeObject) which cannot be instantiated.

 

Traditional recipe

One, now traditional, recipe is to use an interface (instead of the specific object) as the function parameter and, according to the context, instantiate and use an object that implements that interface. The figure below illustrates this.
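In code, that recipe looks roughly like this (type and member names are illustrative, not taken from the presentation):

// the interface abstracts the 'true' object so the caller can substitute it
public interface ISomeObject
{
    string LoadData(int id);
}

// the 'true' implementation, which needs a database connection
public class SomeObject : ISomeObject
{
    public string LoadData(int id)
    {
        // open the connection, run the query, etc.
        return "real data";
    }
}

// the test implementation, which fakes the data
public class FakeSomeObject : ISomeObject
{
    public string LoadData(int id)
    {
        return "test data";
    }
}

// the function now receives the interface; the caller decides which implementation to pass
public static class Functions
{
    public static int TheFunction(ISomeObject source, int id)
    {
        return source.LoadData(id).Length;
    }
}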

 

Good solution… a little acrobatic, but OK… good.

 

A problem, though. We now have 6 actors (to code and maintain):

  • The Function
  • The caller
  • The original 'true' object (SomeObject)
  • The Interface
  • The interface implementations (two of them)!

 

On the other hand, testing a function may also need to instantiate a 'true' object context. In that case, we would have to rewrite (and recompile :-)) our test code to instantiate a 'true' implementation when required.

A third point: in this solution, we defined the Interface to represent an abstraction of the original object. What if the original object itself was, for instance, a database record? This would then introduce a new actor into our playground :-)

 

As Bjarne Stroustrup puts it:

"Any verbose and tedious solution is error-prone because programmers get bored" J

 

Meta-models may be a better way

In the current case, meta-models (see my previous posts) can be used to more simply describe all actors (object structures / methods / properties…).

An illustration:

 

 

Reminder: meta-models are closely related to, and empowered by, Reflection (assemblies' metadata). Through meta-models AND Reflection (the System.Reflection namespace), developers can gain more flexibility to create versatile and scalable software.
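As a small illustration of that flexibility, here is a hedged sketch of the kind of universal factory a meta-model could drive: the type name to instantiate is stored in the meta-model database, and Reflection does the rest (the class name, method and example type names below are hypothetical):

using System;

public static class MetaModelFactory
{
    // typeName is read from the meta-model database, e.g. a hypothetical column holding
    // "MyApp.Data.SomeObject" in a production context and "MyApp.Tests.FakeSomeObject" in a test context
    public static T Create<T>(string typeName) where T : class
    {
        Type type = Type.GetType(typeName, throwOnError: true);
        return (T)Activator.CreateInstance(type);
    }
}

A test would then only change the stored type name, not the code that calls MetaModelFactory.Create<ISomeObject>(…).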

 

In our present case, meta-models not only expose fewer actors to code and maintain, but also lower the complexity level of the actors involved (for instance: maintaining database records instead of hard-coded implementations).

 

The main goal of this approach is, ultimately, to write fewer (and more universal) methods able to instantiate any object and invoke its methods!

I will try to deliver a code sample in a future post. In the meantime, you may download meta-model sample code here!

A dive into the undocumented TFS meta-model – Part II

 

In a previous post, I started exploring the TFS configuration database. Let's continue here by having a look at some more configuration objects and relationships.

TFS Config security objects

 

Referenced table               Primary column     Table                            Foreign column
tbl_security_identity_cache    tf_id              tbl_gss_group_membership         member_id
tbl_security_identity_cache    tf_id              tbl_security_domain_groups       group_id
tbl_security_identity_cache    tf_id              tbl_security_membership_cache    container_id
tbl_security_identity_cache    tf_id              tbl_security_membership_cache    member_id
tbl_gss_groups                 tf_id              tbl_gss_group_membership         parent_group_id
tbl_security_domain            domain_number      tbl_security_domain_groups       domain_number
tbl_security_domain            domain_number      tbl_security_projects            domain_number
tbl_security_identity_type     type_id (int)      tbl_security_identity_cache      type (tinyint)

The relationship diagram above suggests that we can query user logins and group membership for specific projects with a query like the following:


SELECT TOP (100) PERCENT
    login.display_name     AS user_name,
    proj.scope_name        AS project,
    login_grp.display_name AS user_group

FROM dbo.tbl_security_membership_cache AS membership
    INNER JOIN dbo.tbl_security_identity_cache AS login
        ON membership.member_id = login.tf_id
    INNER JOIN dbo.tbl_security_domain_groups AS grp
        ON membership.container_id = grp.group_id
    INNER JOIN dbo.tbl_security_domain AS domain
        ON grp.domain_number = domain.domain_number
    INNER JOIN dbo.tbl_security_projects AS proj
        ON proj.domain_number = domain.domain_number
    INNER JOIN dbo.tbl_security_identity_cache AS login_grp
        ON grp.group_id = login_grp.tf_id

ORDER BY user_name, project

 

The above query may give us results similar to the following

user_name        project                       user_group
Administrator    Research education "agile     Administrators
Administrator    Research education "agile     [TEAM FOUNDATION]\Team Foundation Service Accounts
Administrator    SourceSafeProjects            [TEAM FOUNDATION]\Team Foundation Service Accounts
Administrator    SourceSafeProjects            Administrators
Administrator    DefaultCollection             [TEAM FOUNDATION]\Team Foundation Service Accounts
Administrator    DefaultCollection             Administrators

 

Meta-models… what are they and how can they be useful (for you)

 

I spent a long time during the last months writing about meta-models. That was in the context of a book for Editions ENI (France) about some concepts to which I have dedicated some work for quite a while (it was around 1998, as far as I remember, when I started studying the basis of the approach).

 

In previous posts (see, for instance, Silverlight database to DataGrid, yet another approach - Part II) I exposed a generic approach for describing database meta-data. The direct practical purpose of those posts was reading and displaying data in a Silverlight DataGrid. Through this exercise, we also saw how our structures can help us integrate business rules at several levels (meta-column, meta-table, and so on).

 

In this same exercise, applying the meta-models approach would consist of persisting these structures in the database… then reading and creating them dynamically.

 

We would, for instance, store our meta-columns / meta-tables definitions into the database and let our software objects load these structures and transform them into ‘live’ data, integrating all defined business and technical constraints.

 

 

 

What are meta-models?

 

I use the term "meta-model" to designate "a description of a description" stored in a database and interpreted (read and presented) by one or more (specific) software objects (classes).

 

That is: the information stored in the database describes an object or a behavior, and the software class that reads this information is aware of the fundamental semantics of this information and is, thus, able to interpret it independently of a particular context.

 

This means that, according to our view, a meta-model is composed of:

 

§  Database objects and structures to store the model's data;

 

§  Software (kernel) objects that know the fundamental semantics of the database model.

 

 

 

The collaborative aspect between the 'description' stored in the database and the 'runtime object' (which reads and interprets that description) is essential for this approach to be applicable.

 

 

 

Why use meta-models?

 

Let us take a sample application where the software should display a page (or form) composed of connected Master / Detail regions… i.e. a list of items in a Master grid, with a Detail form that displays the details of the item selected in the Master grid.

 

That 'Master/Detail' layout may be the application's approach for presenting the information of several entities. The application developer may choose to create a specific page for each manipulated entity… a choice which may be correct in some contexts, but which also remains questionable because of its repetitive approach.

 

 

 

In another, more automated, approach, the software developer may prepare templates for each entity's specific Master/Detail information, and load the required template according to the context's entity.

 

This latter approach would be a first step toward a meta-modeling approach to entity presentation.

 

 

 

In fact, storing these templates as part of software objects would make them dependent on their related objects' compiled state. That is: the template cannot evolve without changing and recompiling the related objects' code. Storing the template's information in a database (meta-model) would circumvent this inconvenience.

 

At the same time, storing templates' information in a database would also require a (more or less important) change in the software object's structure and behavior. Using a database meta-model approach (to describe Master/Detail templates in our case) requires the software object to act in a more abstract way. And this is what we called the 'collaborative' approach (between database meta-model and runtime objects).

 

 

 

Finally, in the long run, our software object would constitute a 'kernel object' capable of displaying and handling any entity's Master/Detail information stored in a database meta-model.

 

 

 

What can we do with meta-models?

 

The sample exposed above can of course be extended to more elaborate models and interactions between descriptions stored in a database and their software object counterparts.

 

Let us again take another practical sample: an application that may need to normalize the appearance of ASP.NET data grid controls.

 

We know that a GridView control contains some appearance properties like:

 

§  Width;

 

§  HeaderStyle-CssClass;

 

§  AlternatingRowStyle-BackColor;

 

§  … etc.

 

 

 

We may choose to use the database to store values for each of these properties and let our software objects assign these values to each DataGrid (on the page's load event for instance).

 

Now, to adjust the appearance of our GridView controls, we just need to assign new values to these properties in the database, a task which requires less effort, less time and fewer technical skills.
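A minimal sketch of that idea, assuming a hypothetical grid_appearance table read by a LoadAppearanceSettings helper (the control and method names below are mine, for illustration only):

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Web.UI.WebControls;

public partial class ProductsPage : System.Web.UI.Page
{
    // normally declared in the .designer.cs file
    protected GridView productsGridView;

    protected void Page_Load(object sender, EventArgs e)
    {
        // e.g. { "Width": "900px", "HeaderStyle-CssClass": "grid-header",
        //        "AlternatingRowStyle-BackColor": "#f0f0f0" }
        Dictionary<string, string> settings = LoadAppearanceSettings("products_grid");

        productsGridView.Width = Unit.Parse(settings["Width"]);
        productsGridView.HeaderStyle.CssClass = settings["HeaderStyle-CssClass"];
        productsGridView.AlternatingRowStyle.BackColor =
            ColorTranslator.FromHtml(settings["AlternatingRowStyle-BackColor"]);
    }

    // hypothetical: reads the stored property name/value pairs for the given grid
    private Dictionary<string, string> LoadAppearanceSettings(string gridKey)
    {
        // ... SELECT property_name, property_value FROM grid_appearance WHERE grid_key = @gridKey
        throw new NotImplementedException();
    }
}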

 

 

 

Meta-models and Reflection

 

Reflection is the process of introspection (self-examination) of the internal structures of software objects at runtime. .NET implements Reflection in classes defined in the System.Reflection namespace.

 

 

 

Our previous exercise can be extended to more abstract and useful applications.

 

In fact, a software object (like the GridView in the last sample) has properties to which we can dynamically assign values, provided we simply know their names:

 

 

 

 

// requires: using System.Linq; and using System.Reflection;
// some_object stands for the instance (a GridView, for example) whose property we want to reach
PropertyInfo property = some_object.GetType().GetProperties().FirstOrDefault(
                            p => string.Compare(p.Name, my_name) == 0);

 

 

 

 

After locating the property inside the specified object's Type (class) we may assign it a specified value:

 

 

 

 

property.SetValue( … )

 

 

 

 

Our meta-model may make use of Reflection mechanisms to store property names and values in the database, to read and assign the specified values to the specified objects at runtime.
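Putting the two previous snippets together, a hedged sketch of such a read-and-assign routine could look like this (the class and parameter names are mine; the conversion is kept deliberately simple):

using System;
using System.Linq;
using System.Reflection;

public static class MetaPropertyWriter
{
    // propertyName and storedValue are assumed to come from a meta-model table,
    // for instance a row holding "Width" / "900" for a given control (hypothetical)
    public static void Apply(object target, string propertyName, string storedValue)
    {
        PropertyInfo property = target.GetType().GetProperties()
            .FirstOrDefault(p => string.Compare(p.Name, propertyName) == 0);

        if (property == null || !property.CanWrite)
            return;

        // convert the stored string to the property's type before assigning it
        // (works for simple convertible types; richer types need dedicated converters)
        object value = Convert.ChangeType(storedValue, property.PropertyType);
        property.SetValue(target, value, null);
    }
}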

 

 

 

This approach may be of interest in many software applications. Its main advantages are, again:

 

§  More adaptable and versatile software;

 

§  Reducing development and maintenance efforts, time and cost!

 

 

 

Meta-models and method invocation

 

What we explained above about using System.Reflection to discover an object's property by name and assign a value to the retrieved property is also applicable to methods:

 

 

 

 

// same requirements: using System.Linq; and using System.Reflection;
MethodInfo method = some_object.GetType().GetMethods().FirstOrDefault(
                            m => string.Compare(m.Name, my_name) == 0);

 

method.Invoke( … )

 

 

 

 

On the other hand, user interface command elements like menus and buttons are often associated with an event handler that represents the entry point for the command's action.

 

 

 

To explain how a meta-model can be useful in this area, let us take an example of an update button.

 

An update button collects the user's entries from the form fields, checks their integrity and submits them to the server to update the specified table.

 

Having the meta-data and business rule constraints for the entity's columns, this update operation can be handled through one generic handler, or by a dynamically selected handler specific to the context.

 

So, we know that a data entry form would have an update button, and we may need to assign a specific method as the handler of that button's Click event.

 

All these required information elements can be stored in a database meta-model. The meta-model software would then (as sketched in the code after this list):

 

§  Read (or otherwise find) the name of the method assigned as handler of the button's action event;

 

§  Dynamically locate the method in the software (loading the assembly when required);

 

§  Invoke the method.
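A minimal sketch of those three steps, assuming the meta-model row provides an assembly path, a type name and a method name (all identifiers below are hypothetical):

using System;
using System.Reflection;

public static class MetaModelDispatcher
{
    public static void InvokeConfiguredHandler(
        string assemblyPath, string typeName, string methodName, object[] args)
    {
        // 1. load the assembly when required
        Assembly assembly = Assembly.LoadFrom(assemblyPath);

        // 2. dynamically locate the type and the handler method
        Type handlerType = assembly.GetType(typeName, throwOnError: true);
        MethodInfo method = handlerType.GetMethod(methodName);

        // 3. instantiate the handler when the method is not static, then invoke it
        object instance = method.IsStatic ? null : Activator.CreateInstance(handlerType);
        method.Invoke(instance, args);
    }
}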

 

 

 

Again, this meta-model solution can be applied in different contexts (record submission, web service operation invocation, etc.) and/or to different objects or controls (buttons, menus, etc.).

 

 

 

Interoperability and Integration

 

The database meta-model approach may also be used in a way similar to what we already do with Interop assemblies (which create links between 'unmanaged / native' code and 'managed' code).

 

A meta-model-aware assembly method may, for instance, prepare entries for methods that reside in assemblies that are not meta-model-aware.

 

Such methods can also be defined in separate projects, independent of the 'meta-model-kernel' assembly, and may thus allow a high-level integration of existing applications (or data) into the meta-model solution.

 

 

 

Extensibility

 

As we said above, the meta-model as exposed in this post is a collaborative ensemble where the database descriptions are tightly connected to the meta-model kernel software.

 

Two more questions though:

 

§  What about the evolution of the meta-model's own kernel?

 

§  How to describe objects that we don't yet know about?

 

 

 

Here comes another important point which seems to represent a good basis for answering the above questions:

 

§  An object is represented by a set of properties;

 

§  A property is, essentially, a definition composed of

 

·         A Name

 

·         And an Attribute (the Attribute defines property's constraints: mainly 'Data Type');

 

§  A Value can be assigned to a property in the context of either a class (static property) or an instance of that class. (Note: the Value should conform to the attribute's constraints.)

 

 

 

So we may consider the data of an object as a collection of (Property / Value)… i.e. a collection of ((Name/attribute)/Value).
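In code, that ((Name/Attribute)/Value) view of an object could be sketched as follows (the type names are mine, for illustration only):

using System;
using System.Collections.Generic;

// a property definition: a Name plus an Attribute holding its constraints (here only the data type)
public class MetaProperty
{
    public string Name { get; set; }
    public Type DataType { get; set; }
}

// an object's data: a collection of (property definition / value) pairs
public class MetaObject
{
    public string ObjectName { get; set; }

    public Dictionary<MetaProperty, object> Values { get; } =
        new Dictionary<MetaProperty, object>();

    public void SetValue(MetaProperty property, object value)
    {
        // enforce the attribute's constraint before accepting the value
        if (value != null && !property.DataType.IsInstanceOfType(value))
            throw new ArgumentException(
                $"Value does not match the '{property.Name}' data type.");

        Values[property] = value;
    }
}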

 

This seems to be, more or less easily, presented in a meta-model, which may allow us to handle future 'unknown objects' or future unknown 'properties'.

 

 

 

Meta-models self-describing

 

On one hand, meta-models are data stored in a database… and on the other, they are extensible. Managing them can therefore also be described by meta-models. This recursive aspect can be of interest to some applications, allowing users to autonomously manage software semantic behaviors without the need for extensive technical knowledge.