Tips & tricks for installing and running ICS products

VMWare Workstation command line

Tom Bosmans  18 May 2018 10:11:38
Get a list of running virtual machines
vmrun list

Use that output to get the IP address of that guest:
vmrun getGuestIPAddress  /run/media/tbosmans/fast/Connections_6_ICEC/Connections_6.vmx

(Note that this particular call is pretty buggy and does not return the correct IP address if the guest has multiple interfaces. Still, it can be pretty useful.)

You can run any command in the guest, as long as you authenticate properly (-gu and -gp for the guest user and password).

So, for instance, this command lists all running processes, and you can use the output to actually do something with those processes in a next step (e.g. kill them):

vmrun -gu root -gp listProcessesInGuest  /run/media/tbosmans/fast/Connections_6_ICEC/Connections_6.vmx

You can also run any command using that mechanism.
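These steps are easy to script. Here is a sketch in Python: the parsing helper is mine, and the sample string only illustrates the usual `vmrun list` output shape (a count line followed by .vmx paths); the commented-out loop shows how you might drive vmrun itself.

```python
import subprocess

def running_vms(vmrun_output):
    """Parse `vmrun list` output: first line is a count, the rest are .vmx paths."""
    lines = vmrun_output.strip().splitlines()
    return [line for line in lines[1:] if line]

# Illustrative sample of what `vmrun list` prints:
sample = "Total running VMs: 1\n/run/media/tbosmans/fast/Connections_6_ICEC/Connections_6.vmx"
print(running_vms(sample))

# Untested sketch of driving vmrun directly:
# out = subprocess.run(["vmrun", "list"], capture_output=True, text=True).stdout
# for vmx in running_vms(out):
#     subprocess.run(["vmrun", "getGuestIPAddress", vmx])
```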

IBM Connections Communities Replay events DB2 queries

Tom Bosmans  23 January 2018 17:33:25
Working on a recent problem where events were not processed, I was looking at the wsadmin commands that provide information.
The supplied Jython code, for instance CommunitiesQEventService.viewQueuedEventsByRemoteAppDefId("Blog", None, 100), is pretty useless in situations where you have hundreds of thousands of events in the queue.  The Jython code in the wiki is also plain wrong (but that's a different story).

So I turned to the DB2 database, to examine the LC_EVENT_REPLAY table.  Unfortunately, the interesting detailed information is stored as XML in a CLOB field called EVENT.
It took me quite a bit of time to figure out how to get the information out of that field in an SQL Query.

In fact, the most puzzling part was the notation needed for the XML root element and the node elements: they all need to use the namespace.  Using a wildcard for the namespace is sufficient in this case.
So this query gives you some detailed information about events in the replay table :

SELECT X.*
FROM LC_EVENT_REPLAY T,
     XMLTABLE( '$tev/*:entry'
         PASSING XMLPARSE(DOCUMENT T.EVENT) AS "tev"
         COLUMNS
             "title"       VARCHAR(512) PATH '*:title/text()',
             "author"      VARCHAR(128) PATH '*:author/*:email/text()',
             "communityid" VARCHAR(128) PATH '*:container/@id',
             "community"   VARCHAR(128) PATH '*:container/@name'
     ) AS X
ORDER BY X."communityid";

Of course, you can pull any information you like out of the EVENT XML, but using this query as a start should help you immensely :-) .
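The same namespace-wildcard idea exists outside DB2: Python's ElementTree (3.8+) accepts {*} as a namespace wildcard, much like the *: prefix in the XPath expressions above. A minimal sketch, with an invented miniature of an event entry:

```python
import xml.etree.ElementTree as ET

# Invented miniature of an Atom-style event entry, namespace and all
xml = '<entry xmlns=""><title>Some event</title></entry>'
root = ET.fromstring(xml)

# '{*}' matches any namespace, like the '*:' wildcard in the DB2 XPath
title = root.find('{*}title').text
print(title)
```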

Custom dynamic dns on Ubiquity router with

Tom Bosmans  5 January 2018 16:57:04

Ubiquity Edgerouter X

The Ubiquity Edgerouter X is a very cheap but very powerful router with a lot of options.  It's based on EdgeOS, a Linux-based distro.
That basically allows you to do "anything" you want.

I got it from Alternate, for around 54 euros.

Dynamic DNS

I would finally like to set up a VPN solution, so I can safely access my systems from wherever I am.  My Edgerouter X has these capabilities, so I was looking for a way to set it up.

The first thing to do is look for a Dynamic DNS provider.  The provider I used in the past (long, looong ago) doesn't offer dynamic DNS services anymore, as far as I can tell.
I looked at several free Dynamic DNS providers, but couldn't figure them out (it's probably me).

So I went looking at what my 'real' DNS provider has to offer.  It turns out a dynamic DNS service was added recently (27 December 2017).

Dynamic DNS on

Really simple to do : the UI has a new section 'dynamic dns', where you add a new subdomain.  That subdomain is then listed in your regular subdomains.
I did seem to have problems when using longer passwords, but that may have been a different problem ...

More information :

Dynamic DNS configuration on Edgerouter


The Edgerouter uses a pretty standard ddclient package.

Web UI

Through the web UI, the options are limited.  Specifically, the protocol is limited to a subset of what ddclient has to offer, although the Service says "custom" ...

Image:Custom dynamic dns on Ubiquity router with
Bottom line: it doesn't work, and it is not as "custom" as I would like.


The Edgerouter allows SSH access; I have configured it to use SSH keys.

There is a series of commands to configure the dynamic dns feature (like in the web UI), but although that offers a few more options, it's still not sufficient.

Custom ddclient

Luckily, ddclient is just a simple Perl script, so it's easy to modify.  The problem with the code is that it contains hardcoded elements (like the /update.php? part in the update code).
There are three sections to change :
- variables
- examples
- update code

I copied the code from the duckdns sections and adapted it.

Open ddclient with a text editor, as root (sudo su -).  The ddclient file is here :


Add the keysystems definition at the end of the %services section (after woima, in my case) :

   'woima' => {
       'updateable' => undef,
       'update'     => \&nic_woima_update,
       'examples'   => \&nic_woima_examples,
       'variables'  => merge(
   ...
   'keysystems' => {
       'updateable' => undef,
       'update'     => \&nic_keysystems_update,
       'examples'   => \&nic_keysystems_examples,
       'variables'  => merge(




Add the variables to the %variables object (somewhere at the end is fine):

'keysystems-common-defaults'       => {
                       'server'              => setv(T_FQDNP,  1, 0, 1, '', undef),
                       'login'               => setv(T_LOGIN,  0, 0, 0, 'unused',            undef),


Copy the example code and the update code to the end of the file.

## nic_keysystems_examples
sub nic_keysystems_examples {
    return <<EoEXAMPLE;
o 'keysystems'

The 'keysystems' protocol is used by the non-free
dynamic DNS service offered by and
Check for API

Configuration variables applicable to the 'keysystems' protocol are:
 protocol=keysystems          ##
 server=www.fqdn.of.service   ## defaults to
 password=service-password    ## password (token) registered with the service
                              ## the host registered with the service.

Example ${program}.conf file entries:
 ## single host update
 protocol=keysystems,                                       \\
 password=prettypassword                    \\
EoEXAMPLE
}


## nic_keysystems_update
## by Tom Bosmans
## response contains "code = 200" on successful completion
sub nic_keysystems_update {
    debug("\nnic_keysystems_update -------------------");

    ## update each configured host
    ## should improve to update in one pass
    foreach my $h (@_) {
        my $ip = delete $config{$h}{'wantip'};
        info("KEYSYSTEMS setting IP address to %s for %s", $ip, $h);
        verbose("UPDATE:", "updating %s", $h);

        # Build the URL that we're going to use for the update
        my $url;
        $url  = "http://$config{$h}{'server'}/update.php";
        $url .= "?hostname=";
        $url .= $h;
        $url .= "&password=";
        $url .= $config{$h}{'password'};
        $url .= "&ip=";
        $url .= $ip;

        # Try to get the URL
        my $reply = geturl(opt('proxy'), $url);

        # No response: declare as failed
        if (!defined($reply) || !$reply) {
            failed("KEYSYSTEMS updating %s: Could not connect to %s.", $h, $config{$h}{'server'});
            next;
        }
        last if !header_ok($h, $reply);

        if ($reply =~ /code = 200/) {
            $config{$h}{'ip'}     = $ip;
            $config{$h}{'mtime'}  = $now;
            $config{$h}{'status'} = 'good';
            success("updating %s: good: IP address set to %s", $h, $ip);
        } else {
            $config{$h}{'status'} = 'failed';
            failed("updating %s: Server said: '$reply'", $h);
        }
    }
}

Save the file and restart the ddclient service.

sudo service ddclient restart

This just checks that the code is fine.   Now the configuration.

We need 2 files:


Note that you can generate the second file by using the web UI of the Edgerouter, or the console commands.  The values in the web UI or console command don't matter; you will delete everything anyway.
You need to edit these files as root (sudo su -).

/etc/ddclient.conf :

# Configuration file for ddclient generated by debconf
# /etc/ddclient.conf



The important variables here are the password, and the last line: the hostname you defined in the Domaindiscount24 web interface.
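A minimal /etc/ddclient.conf for the custom protocol would look roughly like this sketch; every value below is a placeholder of my own, not the actual configuration from this setup:

```
# /etc/ddclient.conf - hedged sketch, all values are placeholders
daemon=300
syslog=yes
use=if, if=eth0
protocol=keysystems
server=<ddns server of your provider>
password=<your token>
<your.dynamic.hostname>
```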

# autogenerated by on Fri Jan  5 12:58:19 UTC 2018
use=if, if=eth0


Save both files.

You can now force an update of the ddns by issuing an EdgeOS command :

update dns dynamic interface eth0

You can put a tail on the messages log, to see the results :

tail -f /var/log/messages

The result should be something like this :

Jan  5 15:20:06 ubnt ddclient[10616]: SUCCESS:  updating good: IP address set to
Jan  5 16:39:02 ubnt ddclient[13381]: SUCCESS:  updating good: IP address set to

Of course, instead of editing the files directly on your router, you could copy them off using scp and edit them on your own desktop machine.


Alas, no supportability: EdgeOS updates will likely wipe the changes away.
Also, using the web UI or console to update the dynamic dns settings will wreak havoc on the configuration.  I am working on getting the updates into the ddclient project on SourceForge, but don't hold your breath for these changes to make it all the way down to Ubiquity.
So the solution is not ideal, but it works for now ...

Trying out Domino data services with Chart.js

Tom Bosmans  4 December 2017 11:06:48
Domino Data Access Services have been around for a few years now, but I never actually used them myself.

Since I recently started to dabble in Ethereum mining, I was looking for a place to store my data, draw some graphs and the like.  I first tried LibreOffice Calc, but I couldn't find an easy way to automatically update it with data from a REST API.
So I turned to good old Domino, the grandpa of NoSQL databases (before it was cool).

The solution I came up with retrieves multiple JSON streams from various sources, combines them into a single JSON document, and uploads that into a Domino database (using Python).
To look at the data, I created a literal "SPA" (single page application): I use a Page in Domino to run JavaScript code that retrieves the data, again in JSON format, and turns it into a nice graph (using Chart.js).
So I don't actually use any Domino code to display anything; Domino is simply used to store and manage the data.

This article consists of two parts :

  • Loading data into Domino using Python and REST services.
  • Displaying data from Domino using the Domino Data Access Services and an open-source JavaScript library (Chart.js) to display charts.

Python to Domino

Domino preparation

To use the Domino Data Access Services in a database, you need to enable them:

  • On the server
  • In the Database properties (Allow Domino Data Service)
  • In the View properties

Server configuration

Open the internet site document for the server/site you are interested in.
In the Configuration tab, scroll down to "Domino Access Services" and enable "Data" here.

Note that you may want to verify the enabled methods as well: enable PUT if you plan to use the services that use PUT requests.
And if you're not using Internet Site documents yet, well, then I can't help you :-)

After modifying the Internet Site document, you need to restart the HTTP task on your Domino server.
Image:Trying out Domino data services with Chart.js

Database properties

In the Advanced properties, select "Views and Documents" for the "Allow Domino Data Service" option.
Image:Trying out Domino data services with Chart.js

View properties

Open the View properties and, on the second-to-last tab, enable "Allow Domino Data Service operations".
Image:Trying out Domino data services with Chart.js

There is no equivalent option in Forms.

Python code

Instead of figuring out how to load JSON data in a Notes agent or XPages (which is no doubt possible, but seems like a lot of work), I chose to use a simple Python script that I kick off from a cron job. I run this code co-located with the Domino server, but that is not necessary.  Because the POST requires authentication and the URL uses TLS, this could just as well run anywhere else.
Any other server-side code would do the same thing, so Node.js or Perl or ... are all valid options.

There are two JSON objects being retrieved :

resultseth = requests.get('{wallet}&email={email address}')
data = resultseth.json()


currentprice = requests.get(',USD,EUR')
pricedata = currentprice.json()

The first JSON that's returned contains nested data (the workers object) :

"autopayout_from": "1.0",
"earning_24_hours": "0.1123",
"error": false,
"immature_earning": 0.000890178102,
"last_payment_amount": "1.0",
"last_payment_date": "Thu, 16 Nov 2017 16:24:01 GMT",
"last_share_date": "Mon, 04 Dec 2017 12:41:33 GMT",
"payout_daily": true,
"payout_request": false,
"total_hashrate": 30,
"total_hashrate_calculated": 31,
"transferring_to_balance": 0.0155,
"wallet": "0x5ac81ec3457a71dda2af0e15688d04da9a98df3c",
"wallet_balance": "5411",
"workers": {
"worker1": {
"alive": true,
"hashrate": 15,
"hashrate_below_threshold": false,
"hashrate_calculated": 16,
"last_submit": "Mon, 04 Dec 2017 12:38:42 GMT",
"second_since_submit": 587,
"worker": "worker1"

"worker2": {
"alive": true,
"hashrate": 15,
"hashrate_below_threshold": false,
"hashrate_calculated": 16,
"last_submit": "Mon, 04 Dec 2017 11:38:42 GMT",
"second_since_submit": 111,
"worker": "worker2"

It turns out that Domino does not like that very much, or rather cannot handle nested JSON.  But there is a simple solution: flatten the JSON.

This uses the "flatten_json" package in Python, so it's easy to use.

In the sample above, it would translate

{ "workers": { "worker1": { "worker": "worker1" } } }

into

{ "workers_worker1_worker": "worker1" }

(Information about this particular API is here.)

The flatten_json package can be installed using pip:

pip install flatten_json
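What the flattening does can be illustrated with a small self-contained sketch. This mimics flatten_json's default underscore-joined keys; it is my own illustration, not the package's actual source:

```python
def flatten(obj, parent_key="", sep="_"):
    """Recursively flatten nested dicts into underscore-joined keys,
    mimicking what flatten_json's flatten() does by default."""
    items = {}
    for key, value in obj.items():
        new_key = parent_key + sep + key if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep))
        else:
            items[new_key] = value
    return items

nested = {"workers": {"worker1": {"worker": "worker1", "hashrate": 15}}}
print(flatten(nested))
# {'workers_worker1_worker': 'worker1', 'workers_worker1_hashrate': 15}
```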

From a public API , I can get the current price of ETH expressed in EUR, dollars and Bitcoin.

In Python, I now have two dictionary objects with the JSON data (key-value pairs).
I combine them into a single one by adding the data of the second dictionary to the first:

for lines in pricedata:
   data[lines] = pricedata[lines]
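As an aside, the loop above is equivalent to Python's built-in dict.update for plain dictionaries; the values here are invented for illustration:

```python
# Invented example values
data = {"wallet_balance": "5411", "total_hashrate": 30}
pricedata = {"BTC": 0.021, "EUR": 385.2}

# Same effect as the for-loop above: copy every key/value of pricedata into data
data.update(pricedata)
print(data["EUR"])
```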

The nice thing about these Python dictionaries is that they let you edit the JSON dynamically before submitting it again.  I could remove the data I don't want, for instance.
In this case, I need to do something about the boolean values returned by the Dwarfpool API, because the Domino Data Access Services do not like them!

for lines in data:
   print lines,data[lines]        
   if data[lines] ==  True:
           data[lines] = "True"
   if data[lines] == False:
           data[lines] = "False"

The next step is to post the JSON to Domino.
It's very straightforward: the URL used will create a new Notes document, based on the Form named "Data".

The Domino Form needs to exist, of course, but it's not very important that the fields are on there.

url = ''

There are some headers to set; in particular, "Content-Type" must be set to "application/json".

To authenticate, I use a Basic Authentication header.  In this case, the user I authenticate with only has Depositor access to the database (which is the first time in 20 years of Domino experience that I see the point of having this role in an ACL :-) ).

The service responds with HTTP code 201 if everything went correctly.  This is of course something you can work with (if the response code does not match 201, do something to notify the administrator, for instance).

The full script:

# retrieves dwarfpool data for my wallet
# retrieves current price ETH
# merges the 2 in a flattened JSON
# uploads the JSON into a Domino database using the domino rest api
import requests
import json
from flatten_json import flatten

resultseth = requests.get('<wallet>&email=<email address>')
data = resultseth.json()
print "-----------------"

# retrieve eth price
currentprice = requests.get(',USD,EUR')
pricedata = currentprice.json()

print "------------------"
data = flatten(data)

# merge json data
for lines in pricedata:
   data[lines] = pricedata[lines]

for lines in data:
   print lines,data[lines]        
   if data[lines] ==  True:
           data[lines] = "True"
   if data[lines] == False:
           data[lines] = "False"

url = ''
myheaders = {'Content-Type': 'application/json'}
authentication = ("<Depositor userid>", "<password>")
response =, data=json.dumps(data), headers=myheaders, auth=authentication)
print response.status_code

Lessons learned

  • The Domino DAS are fast and easy to use from Python.
  • The Domino Data Access Services POST requests do not handle nested JSON, so you need to massage your JSON into a flat format first.
  • The Domino DAS are pretty picky about types: they do not support boolean values (true/false).
  • Finally, I have seen a good use of the Depositor role in action!

Chart.js and Domino

Now the data is in Domino, and we can start thinking about displaying it.

The Single Page Application

I created a Page in Domino and put all HTML and JavaScript on that page as pass-thru HTML.

Having the code in Domino has the advantage that the Domino security model is used, so I need to authenticate first to be able to use the SPA.
The same code can live anywhere else (e.g. as an HTML page on any web server), but then I'd have to worry about authenticating the Ajax calls that retrieve the data.
I set the Page to be the "Homepage" of the database.

I use two JavaScript libraries: jQuery and Chart.js.

For Chart.js, there are several ways to include the code; I chose to use a Content Delivery Network:

<script src="" integrity="sha256-vyehT44mCOPZg7SbqfOZ0HNYXjPKgBCaqxBkW3lh6bg=" crossorigin="anonymous"></script>

For jQuery, I learned that the "slim" version does not have the JSON libraries, so use the minified or full version.


Chart.js is a simple charting engine that is easy to use and apparently also very commonly used.
I did have problems getting it to work correctly with my Domino data, but that turned out to be related to Domino, not to Chart.js.

The samples that are out there for Chart.js generally do not include dynamic data, so here's how to use dynamic data in Chart.js with Domino.


What worked best for me is to initialize the Chart in the $(document).ready function.  Without jQuery, you can do the same with window.onload.

The chart is stored in a global variable, myChart, so it is accessible from everywhere.

The trick here is to initialize the Chart's data and labels as empty arrays.  The arrays will be loaded with data in the next step (the title is also dynamic, you may notice).

In this sample, I have two datasets, and only at the end of this function do I call the first load of the data (updateChartData).

<script language="JavaScript" type="text/javascript">
var pageNumber = 0;
var pageSize = 24;
var myChart = {};
// prepare chart with an empty array for data within the datasets
// 2 datasets, 1 for EUR, 1 for ETH
$(document).ready(function() {
    // remove data button needs to be disabled when we start
    document.getElementById('removeData').disabled = true;
    var ctx = document.getElementById("canvas").getContext("2d");
    myChart = new Chart(ctx, {
        type: 'line',
        data: {
            labels: [],
            datasets: [{
                label: "EURO",
                data: [],
                borderColor: '#ff6384',
                yAxisID: "y-axis-eur"
            }, {
                label: "ETH",
                data: [],
                borderColor: '#36a2eb',
                yAxisID: "y-axis-eth"
            }]
        },
        options: {
            responsive: true,
            animation: {
                easing: 'easeInOutCubic',
                duration: 200
            },
            tooltips: {
                mode: 'index',
                intersect: false
            },
            hover: {
                mode: 'nearest',
                intersect: true
            },
            scales: {
                xAxes: [{
                    display: true,
                    scaleLabel: {
                        display: true,
                        labelString: 'History'
                    }
                }],
                yAxes: [{
                    type: "linear",
                    display: true,
                    position: "left",
                    id: "y-axis-eth",
                    // only want the grid lines for one axis to show up
                    gridLines: {
                        drawOnChartArea: false
                    }
                }, {
                    type: "linear",
                    display: true,
                    position: "right",
                    id: "y-axis-eur"
                }]
            }
        }
    });
    // first load of the data
    updateChartData(pageSize, pageNumber);
});
</script>

Load data

The getJSON call (jQuery) connects to the Domino view and passes 3 parameters :
- ps (page size): set to 24 to retrieve the last 24 documents (a document is generated every hour by the Python cron job)
- page (page number): sets the paging; initially set to 0
- systemcolumns=0: avoids Domino-specific data being returned (data that we won't use anyway in this scenario)

The JSON that is retrieved from the Domino view is now loaded into an array of objects, that we can loop through.

The Chart data is directly accessible :
- Labels :
- Dataset 1 :[0].data
- Dataset 2 :[1].data

The last call, myChart.update(), redraws the chart.

var updateChartData = function(ps, pn) {
    myChart.options.title = {
        display: true,
        text: 'Last 24 hour performance - ' + $, "d MMM yyyy HH:mm")
    };
    $.getJSON("/dev/dataservices.nsf/api/data/collections/name/GraphData?systemcolumns=0&ps=" + ps + "&page=" + pn, function(data) {
        console.log(" Loading page " + pn + " with pagesize " + ps + " returned " + data.length + " entries");
        for (var i = 0; i < data.length; i++) {
            //console.log( " index: " + i + "  EUR : " + data[i].TOTAL_VALUE_IN_EUR );
            // ... push the labels and the values for both datasets here ...
        }
        // shift to delete first element in arrays, not necessary in this case
        myChart.update();
    });
};

This is the end result :
Image:Trying out Domino data services with Chart.js


To code the buttons, I used an EventListener (copied from the Chart.js samples).
However, they did not work as expected initially.

On every click, the whole page reloaded; this is not what you want in a Single Page Application!

To counter that, I added the "e" in the function to pass the event, and then used preventDefault to avoid reloading the page.

$( "#addData" ).click(function(e) {
    // --------- prevent page from reloading ------

    // ----
   console.log( " Retrieving page : " + pageNumber );
   updateChartData(pageSize, pageNumber);
   document.getElementById('removeData').disabled = false;

Without jQuery, it would look like this (it needs some additional code for cross-browser compatibility).
The first line is there for cross-browser compatibility (Firefox does not know window.event, which is actually an ugly IE hack).

document.getElementById('addData').addEventListener('click', function(e) {
    if (!e) { e = window.event; }
    e.preventDefault();
    console.log( " Retrieving page : " + pageNumber );
    updateChartData(pageSize, pageNumber);
    document.getElementById('removeData').disabled = false;
});

Only after I made that change did I realize that this behaviour was in fact caused by Domino, and that disabling the database property "Use JavaScript when generating pages" would fix this.
Why our Domino developers ever thought it was a good idea to put HTML forms in Pages, I will never understand (I understand why they used this in Forms).

And in my testing, I still needed the preventDefault, even with the database property set ...

Some after-the-fact googling suggests that using preventDefault is in fact the way to go.

Lessons learned

  • Using a Domino Page to host the JavaScript code enables the Domino security model.
  • I forgot about the Domino quirks with regards to web applications (e.preventDefault).
  • $.getJSON can be set up using $.ajaxSetup, although it's not necessary.
  • I didn't find good Chart.js samples for dynamic loading of data.

Since we're talking Ethereum, you may of course donate here :-)  0x5ac81ec3457a71dda2af0e15688d04da9a98df3c

    Check limits on open files for running processes

    Tom Bosmans  10 November 2017 17:02:41
    OK, setting the correct limits in /etc/security/limits.conf and messing around with ulimit can leave you thinking everything is OK, while it is not.
    This little one-liner shows you an overview of all the running Java processes, to quickly check that the open-file limit is correct.

    Check the limits (open files) for all running Java processes
    (as root):

    for i in $(pgrep java); do prlimit -p $i|grep NOFILE; done

    In this example, you see that just two of the JVMs are running with the correct limits.  The easiest way to resolve this (if /etc/security/limits.conf is correct, and you have a service that starts your nodeagent) is to reboot :

    NOFILE     max number of open files               65536     65536
    NOFILE     max number of open files               65536     65536
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
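If you would rather check from inside a process than via prlimit, Python's stdlib resource module reads the same NOFILE pair. A sketch (this reports the limits of the Python process itself, not of a JVM):

```python
import resource

# Soft and hard limits on open files for the current process -
# the same pair prlimit prints in the NOFILE row
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("NOFILE soft=%d hard=%d" % (soft, hard))
```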

    DKIM deployed on my mail servers

    Tom Bosmans  16 June 2017 10:40:42
    After moving my server to a new physical box (and a new IP address), some of the larger, more difficult mail systems started rejecting mail from my domains.
    Google was OK with my mails, although not ecstatic, but Yahoo and especially Microsoft apparently considered my systems dangerous.

    I googled around, found a lot of crap information, but resolved the issue and improved my mail setup in the end.  It turned out that I should be using TLS (for secure SMTP) and DKIM (DomainKeys Identified Mail).

    The bad stuff

    - There are a lot of links advising you to use Return Path.
    Don't invest time here.  It's a service for spammers, I would say (they call it "email marketing").  You need to register, and you'll likely never get a response anyway.
    - Domino does not support DKIM natively, and likely never will.
    - Microsoft (with all their domains -,, ...) are very tricky.
    - Yahoo is difficult as well, but should you care?  You shouldn't be using Yahoo mail anyway these days.
    - MailScanner breaks DKIM, so it requires changes in the configuration.
    It's a little tricky to find out all the details, because most test tools identify that "dkim is working", while Google complains ...
    - Postfix works with Letsencrypt certificates, but again, the information on the internet is sometimes incorrect, or incomplete at best.
    - DKIM relies on DNS configuration, and that can be tricky (depending on your DNS provider or your DNS server).

    The good information

    - Postfix supports DKIM through the opendkim milter add-on.
    - Testing DKIM can be done using an online tool:
    very handy, fast, easy, no registration.
    - The proof is in the pudding: sending mail to Gmail actually shows the information nice and tidy.
    - Letsencrypt and Postfix work together nicely once the setup is done correctly.

    Let's get to work

    So what I had to do, in a nutshell :

    • Change my Domino configuration to also send outgoing mail through Postfix.  This is as simple as setting the "Relay host for messages leaving the local internet domain".
      This is necessary to allow opendkim to sign the outgoing mails as well.
      Relay host for messages leaving the local internet domain:

    • Configure Postfix: add the milter for dkim (and configure TLS with LetsEncrypt) in
    • Configure MailScanner: apply the settings in the configuration file that mention dkim.
    • Configure opendkim (generate the keys).
    • Configure DNS (create a new TXT record for the key you created.  In general, you can use "default", and you require a record for default._domainkey. )
    • Verify your key using opendkim-testkey.
    • Test the DNS entry (e.g. using an online checker, or using host -t txt).
    • Test the mails you send out, or use Gmail to check.
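The DNS TXT record you create in the fourth step looks roughly like the opendkim-genkey output below; the domain and the key material are placeholders, not real values:

```
default._domainkey.example.com. IN TXT ( "v=DKIM1; k=rsa; "
    "p=MIGfMA0GCSqGSIb3DQEB...AQAB" )  ; ----- DKIM key default for example.com
```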

    Use Gmail to check your settings

    Gmail actually has the ability to verify various settings by default.
    Next to the "to me", click the dropdown button.
    If you have set up DKIM correctly, it will show a "signed-by" line.  You can see TLS information here as well.
    Image:DKIM deployed on my mail servers
    Additionally, you can also go to "Show original"
    Image:DKIM deployed on my mail servers
    This will show the source of the email, and has a summary header that contains important information.
    As you can see, it shows that DKIM has PASS.  If it says something else here, you need to go back to the drawing board.
    Image:DKIM deployed on my mail servers

    This can contain a lot more options, by the way.  If you use DMARC as well, it will show up here too.  For my domain, you see the SPF option.

    Microsoft's domains

    Once you're certain DNS is set up correctly and you're not an open relay, you can easily contact Microsoft directly to unblock your mail server(s).
    This immediately works for, and the other domains.

    This took only a few hours in my case.

    Server outage (disk failure)

    Tom Bosmans  6 June 2017 10:08:04
    Yesterday morning, I noticed that my server was running slow.  I couldn't see any processes hogging resources, though.

    Instead of really looking into the problem, I decided to reboot the machine.  That was a mistake.  As the server did not come back online, I realised that there was likely a problem with the disks.
    I have a dedicated server at Hetzner, and it's really the first time I've run into problems.  I can really recommend this hosting provider.

    The server has a software raid with 2 disks, running CentOS.
    I assumed that mdadm was trying to recover, but I had no way of knowing, since the machine did not come back online.
    At this point, I got very scared: I feared loss of data.

    Fortunately, the guys at Hetzner supply a self-service console to the machine (you start a rescue system).

    I could log in using that mechanism, and then I was able to mount the filesystems in the raid.  It quickly became clear that indeed, one disk had died.

    Now I could do two things :
    - Request a disk replacement.  This was going to take a while, and during that time I wouldn't have a redundant disk.  And chances are high that when one disk fails, the other will also fail.
    - Move my installation to a new server.  I know that between ordering a new server and having the OS installed on it ready for use, it only takes around 1 hour (did I mention these guys are great?  Note that this is physical hardware, not some cloud service!)

    I decided to go with option 2.

    This consisted of copying the data from the old server to the new one (this took a long time), reinstalling the software, reapplying the configuration for my mail servers and other stuff, and then adjusting the Domino configuration (changing the ip addresses).

    In the end, it took me 10 hours in all to get the new server up and running... including copying the data.   Now I just have to decommission the old server, and I'm done :-)

    Kubernetes and dns

    Tom Bosmans  28 April 2017 11:00:25
    Kubernetes apparently doesn't use a hosts file, but instead relies on DNS.  So when setting up Orient Me (for Connections 6) on a test environment, you may run into problems.

    Then you may want to look back to this older blog entry :
    Setup DNS Masq

    You're welcome :-)
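
    The idea is to run a small dnsmasq instance that answers for your test hostnames, since pods resolve via DNS rather than the node's hosts file.  A minimal sketch (the hostname and IP address are made-up examples):

```
# /etc/dnsmasq.conf - hypothetical example
# Answer authoritatively for the test hostname that would
# otherwise only exist in /etc/hosts
address=/connections.example.com/192.168.1.10

# Forward everything else to a real upstream resolver
server=8.8.8.8
```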

    To keep with the docker mechanism, look at this to make your life easier :

    Note that this is obviously not the only solution,  you can also follow these instructions :

    Security Reverse Proxy with Connections - forcing all traffic through the interservice url

    Tom Bosmans  20 April 2017 15:17:50
    In a recent project, we are using IBM Datapower as a security reverse proxy to handle authentication and coarse-grained authorization for Connections 5.5.

    The approach we follow is similar to what I have described here :

    In short : you want to prevent the interservice traffic from passing through the reverse proxy (Datapower or WebSEAL, which one is not relevant at this point).

    The picture below shows that you want to have 2 paths of access :

    - for users, API access etc. : through your reverse proxy

    - the internal, backend connections : through your HTTP server

    Image:Security Reverse Proxy with Connections - forcing all trafic through the interservice url

    To do that, you need to make sure you have different values for the href/ssl_href and interService values in LotusConnections-config.xml.

                 <sloc:static href="" ssl_href=""/>
                 <sloc:interService href=""/>

    There are several things to note here :

    - you need to do this for ALL services defined in LotusConnections-config.xml

    - all URLs are https

    - the interservice URL is different from the static URL.

    - the interservice URL points to the HTTP server (or a load balancer pointing to the HTTP servers)

    - the static URLs point to your reverse proxy (or the load balancer pointing to your reverse proxy)

    - bonus points : put the interservice URL in a different domain from the static URLs, to avoid potential XSS problems.

    Some additional remarks :

    - do not use the dynamicHost section, even though it is generally recommended when using reverse proxies

    - set the forceConfidentialCommunications flag to "true".  ALWAYS.  You don't want to use http in these times, you always want to use https.
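
    Putting those remarks together, a single service entry would look roughly like this.  The hostnames are made-up examples (static points at the reverse proxy, interService at the internal HTTP server), and your real serviceReference will carry more attributes:

```
<sloc:serviceReference enabled="true" serviceName="communities" ssl_enabled="true">
    <sloc:static href="https://connections.example.com" ssl_href="https://connections.example.com"/>
    <sloc:interService href="https://connections-internal.example.org"/>
</sloc:serviceReference>
```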

    Now for the problem : although this should instruct Connections to use the internal http server for interservice requests, in reality, the backend still makes calls to the static urls.

    That is very annoying : if you don't allow access from your back-end servers to the reverse proxy, everything will fail.  If you do not allow unauthenticated access through Datapower (or your reverse proxy), widgets don't render.

    This becomes apparent for Widgets in the following manner :

    [3/27/17 19:07:21:459 CEST] 00000149 IWidgetMetada W org.xml.sax.SAXParseException: The element type "meta" must be terminated by the matching end-tag "".
    [3/27/17 19:07:21:535 CEST] 00000149 IWidgetMetada W org.xml.sax.SAXParseException: The element type "meta" must be terminated by the matching end-tag "".
    [3/27/17 19:07:21:845 CEST] 000001c6 AbstractSpecF W org.apache.shindig.gadgets.AbstractSpecFactory SpecUpdater An error occurred when updating Status code 500 was returned. Exception: org.apache.shindig.common.xml.XmlException: The element type "meta" must be terminated by the matching end-tag "". At: (1,415). A cached version is being used instead.
    [3/27/17 19:07:21:847 CEST] 000001c7 AbstractSpecF W org.apache.shindig.gadgets.AbstractSpecFactory SpecUpdater An error occurred when updating Status code 500 was returned. Exception: org.apache.shindig.common.xml.XmlException: The element type "meta" must be terminated by the matching end-tag "". At: (1,415). A cached version is being used instead.

    This means that the back-end application (the WidgetContainer in this case) tries to retrieve the Widget configuration xml file through the Reverse Proxy.  Because the Reverse Proxy does not allow unauthenticated access, it presents an (html) login form.  That is interpreted as "invalid xml".

    By following the instructions here to allow unauthenticated URIs through your reverse proxy, this can be resolved.

    If you don't allow access from your backend to your reverse proxy, you're still out of luck though.  And that previous part does nothing for any custom widgets or third party widgets you may have deployed (e.g. Kudos Boards).

    Core Connections

    There is an undocumented solution for this, luckily, that you may get through support.

    You need to edit opensocial-config.xml , in your Deployment Manager's LotusConnections-config directory.

    After this line :


    Add these lines :

         <proxyInterServiceRewrite name="opensocial" />
         <proxyInterServiceRewrite name="webresources" />
         <proxyInterServiceRewrite name="activities" />
         <proxyInterServiceRewrite name="bookmarklet" />
         <proxyInterServiceRewrite name="blogs" />
         <proxyInterServiceRewrite name="communities" />
         <proxyInterServiceRewrite name="dogear" />
         <proxyInterServiceRewrite name="files" />
         <proxyInterServiceRewrite name="forums" />
         <proxyInterServiceRewrite name="homepage" />
         <proxyInterServiceRewrite name="mediaGallery" />
         <proxyInterServiceRewrite name="microblogging" />
         <proxyInterServiceRewrite name="search" />
         <proxyInterServiceRewrite name="mobile" />
         <proxyInterServiceRewrite name="moderation" />
         <proxyInterServiceRewrite name="news" />
         <proxyInterServiceRewrite name="profiles" />
         <proxyInterServiceRewrite name="sand" />
         <proxyInterServiceRewrite name="thumbnail" />
         <proxyInterServiceRewrite name="wikis" />

    Sync your nodes, and restart everything.  All traffic for the standard widgets (e.g. on the Homepage or in Communities) will now render correctly.
    Note that this is not valid for CCM nor for Mobile; these have separate settings in library-config.xml and mobile-config.xml respectively, where you can select to "use interservice url".
    For Docs, the configuration is done in the json configuration files.  I'm not going into those details here.

    Custom or third party Widgets Connections

    So great, the core Connections widgets are now rendering, and all traffic for them is now going through the interservice URL you defined.

    There is however the small problem of custom widgets.  These are not handled by the rules in opensocial-config.xml.
    We use Kudos Boards, but this next section is valid for all (most) custom or third party widgets that you need to behave properly.

    There are 2 more files to edit :

    • service-location.vsd: to allow you to edit LotusConnections-config.xml
    • LotusConnections-config.xml

    You also need widget-config.xml, and you still need to edit opensocial-config.xml.


    Find the custom widget's configuration in widget-config.xml.  In this example, we're looking at Boards (this is a sample, not an actual widget definition !).
    You need the defId value here, so in our case, Boards.

    <widgetDef defId="Boards" description="Kudos Boards widget" primaryWidget="true" modes="fullpage edit search" themes="wpthemeNarrow wpthemeWide wpthemeBanner" url="/kudosboards/boards.xml" showInPalette="true" loginRequired="true"/>


    In service-location.vsd, add a line for every custom/third party widget.  You need to use the defId name from widget-config.xml in the previous step.

    The values here need to match the Widget definition in widget-config.xml, the service reference in LCC.xml, and the proxyInterServiceRewrite name in opensocial-config.xml.
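
    For illustration only - in my experience service-location.vsd is an XML schema that enumerates the valid serviceName values, so the line you add is an enumeration entry along these lines (verify the exact structure against your own file):

```
<xsd:enumeration value="Boards"/>
```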


    In LotusConnections-config.xml, you then add a serviceReference entry for every custom (or third party) widget.  To be able to do that, you must first have changed service-location.vsd.

    <sloc:serviceReference enabled="true" serviceName="Boards" ssl_enabled="true">
                 <sloc:static href="" ssl_href=""/>
                 <sloc:interService href=""/>
    </sloc:serviceReference>


    Finally, in opensocial-config.xml, add the rule for your custom widget, after the rules you added earlier.

         <proxyInterServiceRewrite name="opensocial" />
         <proxyInterServiceRewrite name="thumbnail" />
         <proxyInterServiceRewrite name="wikis" />
         <proxyInterServiceRewrite name="Boards" />

    That is it.  You now sync your nodes, and restart everything.  Your custom widget will now work correctly.

    If all else fails ...

    Now there is a simpler solution to all of this.  You can use your /etc/hosts file to simply map the public url to the IP address of the internal http server.
    I don't particularly like this solution, though.  It is difficult to maintain, and it probably breaks your company's standards and rules.

    CCM installation problems with Connections 5.5 - Connections Admin password changes

    Tom Bosmans  5 October 2016 14:20:13
    During installation of CCM with Connections 5.5 using an Oracle RAC cluster, my colleagues ran into a number of problems and got the environment into a completely broken state.

    The core problem is that FileNet does not support the modern syntax for jdbc datasources.  This technote explains what to do.

    That is simple enough .

    However, my colleagues continued on a detour, where they also changed the ConnectionsAdmin password.  That created a bunch of problems of its own.
    It turns out that the Connections 5.5 documentation is incomplete on where to change the occurrences of the Connections Admin user and/or password.

    The CCM installer mostly uses the correct source for the username / password (the variables you enter in the installation wizard or the silent response file).
    But the script to configure the GCD datasources, for some reason, uses a DIFFERENT administrator user.

    It goes back to look at the connectionsAdminPassword variable that's stored in the file, in your Connections directory (eg. /data/Connections/ )

    So when you change the password for the Connections Administrator, don't forget to update it in the file as well, before running the CCM installation.

    "connectionsAdminPassword": "{xor}xxxxxxxxxxx",
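
    The {xor} value is WebSphere's standard password obfuscation: each byte of the password XOR'ed with the underscore character (0x5F), then base64-encoded.  You can generate it with WebSphere's PropFilePasswordEncoder tool, or with a small helper like this sketch of mine (not an IBM tool), which is handy for checking that the stored value really matches the new ConnectionsAdmin password:

```python
import base64

XOR_KEY = 0x5F  # '_' - the key used by WebSphere's {xor} obfuscation


def xor_encode(password: str) -> str:
    """Obfuscate a clear-text password into WebSphere {xor} notation."""
    raw = bytes(b ^ XOR_KEY for b in password.encode("utf-8"))
    return "{xor}" + base64.b64encode(raw).decode("ascii")


def xor_decode(value: str) -> str:
    """Recover the clear-text password from a {xor} string."""
    raw = base64.b64decode(value[len("{xor}"):])
    return bytes(b ^ XOR_KEY for b in raw).decode("utf-8")
```

    Decoding the value found in the properties file and comparing it to the password you just set is a quick sanity check before rerunning the CCM installer.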

    In the end, this took me over half a day to resolve, partly because the guys working on it had enabled all the traces they could find (so I also ran into an out-of-diskspace exception), but mostly because the installation process for CCM is slow.