No Food for Thought

Food is something you should provide to your brain long before coming to this blog. You will find no food recipes here, only raw, serious, non-fake news for mature minds.

Issues using GNU/Linux as a "desktop" (PC)

admin Saturday February 6, 2016

I hit numerous issues when I moved my desktop from Microsoft Windows to GNU/Linux. When they started to feel like too many, I remember starting a document listing the main ones. Eventually, I hit many more issues, became familiar enough with the system to report some of them, and started filing bugs, a process I am still far from having completed more than a decade later (thanks to our ability to produce new bugs being much more developed than our ability to reproduce our existing bugs). That left me little time to work on the list, and at some point, probably after realizing how huge it would have to become, I gave up and must have deleted it.

So I was amused when, last week, I stumbled upon another list created by someone with the same objective. After going through it, I guess I was right to give up on mine - it is a huge list of problems (and although I learned of it because it received a 2016 update, it remains huge despite the decade of progress since I started mine). In fact, I already knew there were millions of bugs and such a list could be as long and anecdotal as I wanted it to be, but overall, I find that Artem's long list strikes a good balance between listing overly specific issues and omitting serious ones. The list is highly imperfect - it is sometimes repetitive, at times unsubstantiated, and as he acknowledges himself, some items are less bugs than disadvantages of GNU/Linux in comparison to other OS-en (in other words, it is a lot more than a list of bugs). Many listed issues I never experienced myself or even read about, and I am far from agreeing with every claim on that page. Some points simply seem too severe. Many issues affect software which most GNU/Linux users do not use. That being said, I think that overall, the document gives someone considering moving their PC to GNU/Linux a good idea of what the move involves.

One of the reasons why I gave up on my list is that determining whether an issue should make the list was hard. It depended not only on the issue's importance, but on the affected software's popularity. In any OS, virtually no software is used by everyone. But with GNU/Linux, that problem is made a lot worse by high fragmentation. Indeed, fragmentation itself is among the 9 general issues Artem lists in his summary. While some could point out that this is not an issue per se, it could instead be called a meta-issue, since fragmentation means extra complexity, more difficulty obtaining support for any specific GNU/Linux install, and fewer developers available to work on each piece of software that has alternatives. Even though Artem's list has been updated, it is unfortunately hard to use it to estimate how fast issues get solved (which was probably my main goal), but they could surely be solved much faster without such fragmentation.

I have never suggested that anyone switch a PC to GNU/Linux. At best, I might have significantly influenced 4 people to make the switch. All of them hold bachelor's degrees in computer science or computer engineering, and at least 3 of them no longer use GNU/Linux as their primary OS. I do not regret having put my efforts into improving GNU/Linux rather than into directly recruiting new users. In fact, when reading Artem's list, it seems unreasonable for a system administrator to familiarize themselves with GNU/Linux for the sole purpose of switching their own PC to GNU/Linux, unless that administrator intends to improve GNU/Linux... although as soon as I go back to Microsoft Windows for a day, I'm reminded that GNU/Linux is far from having a monopoly on problems.

While most of my contribution to Artem's list is probably the absence of even more items, I have contributed to some of the pages the list links to. But these contributions are a bit ironic:

  • One page is a KDE bug report which a KDE developer closed because the issue does not affect Wayland. My contribution there was to point out that the fact that KDE could be used without hitting that bug did not mean the bug was solved, since X.Org is still affected. The faulty developer did not reopen it, and the ticket remains closed.
  • The other linked page I contributed to significantly is Wikipedia's article on Heartbleed, which I expanded, fully reviewed and maintained until it was assessed as a "good article" - the very first Wikipedia article dedicated to an open source software bug. In a sense, Heartbleed was a great bug: it highlighted how vulnerable free software can be, and it was the trigger needed for the Core Infrastructure Initiative, the "new initiative" Artem mentions in his section "On a positive note". I like to think that the "Root causes, possible lessons, and reactions" section I created helped Jim Zemlin convince organizations to join the CII.

    I believe free software's success in the last decade has made its weaknesses obvious. Technically, free software is neither more nor less secure than proprietary software. Each piece of software has its own security. But when I joined the free software movement, many claimed a piece of free software was generally more secure than a proprietary equivalent - for example, that a GNU/Linux distribution would be safer than Microsoft Windows. Since then, history has disproved that myth, and Artem's article reflects that very well. Free software projects themselves did not necessarily improve. A decade after I filed Debian's ticket #339837, Debian has made some progress. http://security.debian.org no longer claims that Debian's average response time to security issues is under 48 hours, but still claims that "Debian takes security very seriously", now without any supporting statistics. But as I write these lines, Debian's security bug tracker lists over 10 high-impact vulnerabilities (in the current Debian version) acknowledged by Debian itself, along with tens of vulnerabilities still unrated.

    Some projects have made more serious changes to actually improve their security. Following Heartbleed, OpenSSL adopted a roadmap and a security policy. It created a blog, added members to its team, improved its responsiveness to security reports, performed code cleanup, started a code audit, and adopted a code review system and a code review policy.

    Unfortunately, others thought securing OpenSSL required forking. Therefore, in the wake of Heartbleed, a major fork appeared: LibreSSL (not to mention BoringSSL). As if OpenSSL and GnuTLS were not enough, we now have 3 equivalent libraries, plus many lesser-known forks and equivalents - the very meta-issue Artem's summary denounces in its fourth point.
    So while Heartbleed's long-term effect was great in a sense, in another sense, if lack of resources was the root cause of Heartbleed, it is not clear that a reaction which worsens fragmentation will help. I will not claim that OpenSSL is not more secure than it was before Heartbleed, but in the long term, I doubt the reaction is very helpful for TLS library users.


A lot of my work on GNU/Linux was focused on the desktop. I am proud of the difference I made. Yet, I am not so proud of the result at this point. A lot has changed since I started working on GNU/Linux, and yet, much remains the same. Thankfully, one thing also remains unchanged: users of fully free software GNU/Linux distributions do not need to worry about vendor lock-in from their operating system.

The cost of quality / La prise de qualité

admin Sunday January 10, 2016

Francophones are used to poor translations. In the end, quality is costly.

But crappy translations can be just as costly:

Allergan's coupon - Original version
Back of Allergan's coupon - An apparently generous translation to French

Careful observers will note that while the English version is a coupon, the French version is not. Attentive merchants can actually point out that the French version is a mail-in rebate and refuse to pay more than $2.50.

Rant: Seatpost-mounted rear mudguards (Polisport CROSS COUNTRY)

admin Saturday January 9, 2016

Winter is a hard time for utilitarian bikers in Quebec. And due to the use of rock salt for de-icing, it is even a lethal time for bikes. I always use a separate bicycle in winter, because each bicycle used in winter is sacrificed. Unless its owner can be arsed to perform sufficient maintenance, a bicycle not designed for winter is usually pretty much unusable after a single season.

Therefore, I have been buying one bicycle per winter for more than a decade. Winter biking has lots of downsides, but since I started working in an office, the need to keep my clothing acceptably clean has created a new problem - equipping each new winter bike with efficient mudguards. This simple requirement has turned out to be nearly as problematic as obtaining the bike itself.

It may surprise those who have never tried winter biking in a snowy country, but fenders are not merely useful on winter bikes - they are more useful than in summer. Of course, there is no water during the coldest part of winter. But the winter bike is needed from the moment de-icing starts to the moment streets are cleaned. This includes an initial period of several weeks during which streets are very frequently wet due to melting, as well as another such period at the end. Melting snow is not the only reason why fenders are required in winter. Again, it is rock salt which makes matters worse, by keeping streets wet even below 0 °C and, most importantly, by turning white snow and clear water into dirty brown slush. It is one thing to come to the office with a dirty, rusty, ugly bike. It is another to also arrive with a coat and/or backpack covered in dirt.

One would think you could reuse the previous winter bike's mudguards. But since each bike is different, I often have to buy a different model. Or there is just so much rust that the previous mudguard cannot be removed. Since each installation is different, I have spent hours furiously installing mudguards (sometimes with poor instructions) on bikes that were sometimes hardly compatible. Therefore, a few years ago, I started asking whichever store sold me the bike to also sell me compatible mudguards and to install them (I officially gave up in 2012, swearing never to install mudguards myself again).

Just buying mudguards and getting them installed is sometimes nearly as costly and/or time-consuming as buying a used bike. But the most frustrating part is when the installation is deficient. The front mudguard is usually a lesser problem, although I have had several break. The rear mudguard is often worse, because some bikes offer no good support for attaching classic mudguards (for example, bikes with rear shock absorbers).

This year, my winter bike has rear suspension. I was very skeptical when the store recommended a seatpost-mounted mudguard. But since the seller assured me the fender would stay adjusted and efficient, and since there were no other options, I did go with a seatpost-mounted mudguard - specifically a Polisport CROSS COUNTRY mudguard (aka SPLASH QUICK GUARD), supposedly designed for mountain bikes. The product requires no tools and very little time to install… according to the manufacturer anyway. Indeed, the initial installation required no tools and took 2 minutes.

Polisport's CROSS COUNTRY bike mudguard


It only took a couple of kilometers to notice that, due to the shock absorber, the mudguard would often collide with the tire, so I had to reinstall it higher. Next, it only took a few rides to notice that, as with other seatpost-mounted mudguards, its angle would vary, losing its alignment with the tire and ruining its efficiency. I then removed the bottle cage to allow more room for my hands, so that I could tighten the attachment as much as possible (because it's not just that the installation requires no tools - you cannot use tools to maximize the grip, unless you're willing to risk breaking the plastic). Again, it only took a couple of rides to notice that even when the CROSS COUNTRY stays in the optimal position, it does not stop all water. Frankly, this is no surprise considering its width and how badly it fits my wheel (to be fair, its efficiency is probably lowered because my shock absorber forced me to keep extra distance between the mudguard and the tire).

So last weekend, I gave up, somewhat broke my commitment to myself and resorted to patching the CROSS COUNTRY with thick cardboard cut from a jumbo cereal box. I taped the piece face down on top of the mudguard (so that the side stopping the water is the most waterproof one). Bending thick cardboard in 2 directions simultaneously is non-trivial; in total, I suppose I have spent nearly an hour {re,}installing that mudguard so far. Since the patch was installed, I have not noticed any leakage. The result is however esthetically imperfect:

Chealer's bio-enhanced Polisport-based rear mudguard - aerial perspective
Side view of my patched mudguard


The positioning also remains imperfect - I felt another collision between the mudguard and the tire today. But I was satisfied enough to refrain from this rant until an anecdote 2 days ago. My bike was next to another one in a rack designed for 4 bikes. This is uncommon here during winter, so I couldn't help but look at the other bike. I then couldn't help but notice its rear mudguard, since it was the exact same model as mine (still in unpatched form). And I obviously couldn't miss the fact that, as a true Polisport CROSS COUNTRY mudguard, it deviated from the wheel by about 1°, meaning it was surely letting half of the water through, towards its poor owner.

I will not claim that seatpost-mounted mudguards should not exist. But Polisport's CROSS COUNTRY mudguard is not wide enough to do a good job on mountain bikes. And it is way too unstable. Do not buy it.

Update: a month after writing this, I passed an adult winter biker who was walking next to his bicycle, on which a not-so-young child was sitting. I was considering asking them why they were traveling that way when I was passed by a teenage winter biker. I realized his bike had a misaligned seatpost-mounted rear mudguard. When I caught up with him, I confirmed it was Polisport's CROSS COUNTRY.

Optimizing the optimization - Performant Incremental Updates for Packages files

admin Wednesday January 6, 2016

In 2006, Michael Vogt implemented support for PDiff (differential Packages) files in APT to optimize the process of updating Packages. At the time, the Packages file, which was already several MB compressed, had to be downloaded entirely to update package indices. Joerg Jaspert and probably other members of the archive maintenance team implemented support for generating PDiff files on the archive side.

Unfortunately, APT's performance when applying (several) PDiff files was quite poor, sometimes worse than that of a non-incremental update, as reported (at least) in ticket 372712 and ticket 376158, which was particularly problematic for testing users - until APT 1.1.7, a nice X-mas present from Julian Andres Klode, who identified the bottleneck and optimized the process.
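
For anyone who wants to compare the incremental and full code paths, PDiff usage can be toggled through APT's Acquire::PDiffs option. A quick sketch (the apt.conf.d file name is arbitrary):

# one-off: skip PDiffs for a single update
apt-get update -o Acquire::PDiffs=false

# or persistently, in /etc/apt/apt.conf.d/90pdiffs:
Acquire::PDiffs "false";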

I haven't had the time to test testing since Jessie's release, but I'm starting to miss it :)
I wish to thank Michael, Anthony Towns, Andreas Barth, Joerg and others who contributed to the initial implementation, as well as jak, who's finalizing this work a decade later with an optimization job even more thankless than the initial implementation.
The next step? Differential updating of packages, with a lowercase "p"... which promises to be even harder to get right.

Finally, I'm using this opportunity to thank APT contributors - particularly its current maintainers mvo and jak - for all of their work. Progress has been slow over the last decade, but the direction is right, and each step is appreciated.

Electoral reform coming to Canada?

admin Tuesday January 5, 2016

After the last election, I wrote about the promised federal electoral reform. Nothing has really changed since then, which is why I am writing a new post.

Since the election, I have seen electoral reform discussed several times on the CBC's At Issue panel, by several commentators. Just yesterday, Tasha Kheiriddin mentioned reform on another CBC panel. I am not sure any minister has received more media attention than Maryam Monsef since the election.

10 weeks after the election, the media still hasn't forgotten the Liberal Party's promise of electoral reform. It really seems like the government will propose electoral reform. What will happen - which system will be proposed, whether a referendum will be held, and the result of such a referendum - is still unknown, although the Liberal Party ruled out a referendum just last week. But clearly, the coming months of Canadian politics will be exciting to watch. The system proposed will certainly be extremely suboptimal. But any change will probably be the greatest advancement in governance at the federal level since women's suffrage, a century ago. Canadian citizens could realize in 2019 that they have (slightly) more political power than checking a box every 4 years. The next generation may realize that democracy is not merely FPTP - unless we want it to be kept in its infancy.

On the other hand, if loyalist Canadians fear taking the lead over the UK for once and reject reform by referendum, governance reform could become a topic as taboo as constitutional changes and could be set back by decades.

Finally, while achieving proportional representation is just one governance improvement for me, I would like to congratulate Fair Vote Canada for all they have done during the campaign and after. FVC probably did not influence the results of the last election in the end, but your continual activity may still prove useful in the upcoming debate. Thank you, Anita Nickerson, Kelly Carmichael and all others for all the energy you invest in our goal. Keep up the good work.

2017 update: No

Civilization: Beyond Earth on Debian GNU/Linux? Good luck

admin Saturday December 26, 2015

Ever since I moved to GNU/Linux, the video game I missed the most was Sid Meier's Civilization. The only version ported to GNU/Linux was Sid Meier's Alpha Centauri, probably my favorite version. But that port seemed to be an afterthought. One needed to look for the special installer, which was buggy.

With the release of CivBE, I was under the impression that Firaxis was finally, truly making GNU/Linux a supported platform for Civilization. The GNU/Linux version was released less than 2 months after the Microsoft Windows version. Mac/Linux was even the fourth item in the game's official FAQ. For the first time in many years, I put a video game on my wish list. To my surprise, my mother gave it to me this week (I suppose she did not realize it was the same series I had spent so many hundreds of hours playing over nearly 2 decades :P).

I was also happy to see the game's box no longer had the huge Games for Windows banner. Unfortunately, the system requirements claimed Windows was necessary. But I assumed those were just carelessly written requirements, as usual (how credible are requirements asking for "Windows Vista SP2/ Windows 7" for a Q4 2014 game anyway?). I was less impressed when I inserted the DVD and realized there was absolutely no material for GNU/Linux, nor any documentation explaining where to go. And now, I cannot even find instructions on the Internet. The FAQ item mentioned above still discusses the Linux version as something in the future (although Wikipedia says it was released on 2014-12-18), and I cannot even find installation instructions when searching on Google.

Is Civilization: Beyond Earth beyond Windows? I am far from being convinced at this point.

Hopefully, at least the game will be stable - without serious bugs like those I experienced playing the original versions of Civilization III and V (not to mention the serious networking issues with Civilization IV).

Memory usage of Apache's PHP children processes

admin Monday December 14, 2015

I ran a PHP benchmark for which I allowed PHP to take as much memory as it wanted. The benchmark worked, but I then realized Apache was using 2 GB of RAM. The parent process was fine, but it turned out the apache2 child process which had run the benchmark was still using 2 GB (RES).

I thought that was abnormal, but I checked on ##php and eventually had confirmation from several people that - to my great surprise - this is not a memory leak. This behavior is expected. And indeed, I can re-run the same benchmark and it will never run out of memory if it managed to reserve enough memory the first time. I am not a sysadmin, but that was still quite a shock. I was told PHP has its own memory manager, and only releases memory when the Apache child is restarted. In reality though, other processes (including Apache children) will manage to "steal" memory reserved by idle children. This is surely the part I find most amazing. I am curious to learn how Linux manages that.

So, the memory Apache grants to PHP children will sometimes only be released when these child processes are restarted, but other processes will manage to reclaim that memory if needed. At least in our configuration (Debian 8's PHP 5.6.14 on Apache 2.4.10 with the prefork MPM).

One important word above is "sometimes". For some reason, children sometimes release their memory immediately. I initially thought it took 2 executions for memory to stick, but a second execution does not always lock it in, which is why I would welcome pointers to discussions of this behavior. It seems the memory is not freed if 2 requests come with little idle time (seconds) in between.
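
To spot which child is holding the memory, something like the following can help (a quick sketch; the apache2 process name assumes Debian's packaging, and RSS is reported in KiB by GNU ps):

$ ps -C apache2 -o pid,ppid,rss,vsz,cmd --sort=-rss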

The following shows well enough an Apache restart freeing 2 GB of RAM:

root@Daphnis:/var/log/apache2# free -h; grep Mem /proc/meminfo; service apache2 restart; free -h; grep Mem /proc/meminfo
             total       used       free     shared    buffers     cached
Mem:          3,0G       2,4G       660M       9,5M       688K        75M
-/+ buffers/cache:       2,3G       736M
Swap:         713M       276M       437M
MemTotal:        3173424 kB
MemFree:          675824 kB
MemAvailable:     634736 kB
             total       used       free     shared    buffers     cached
Mem:          3,0G       216M       2,8G       9,4M       756K        88M
-/+ buffers/cache:       126M       2,9G
Swap:         713M       270M       443M
MemTotal:        3173424 kB
MemFree:         2951552 kB
MemAvailable:    2917400 kB

Transition to the SI - A matter of numerous Ms-s

admin Saturday December 12, 2015
##php wrote:

(19:32:13) chealer: so if I consider that PHP's 0 ds should be 1 ds, then that proves my understanding that it's not the DB which adds that extra second.
(19:33:33) Literphor: chealer: What is ds? A decisecond?
(19:33:41) chealer: Literphor: yeah. it's all on a 1 Gb/s LAN, but that probably explain the 3 ds difference.
(19:33:45) chealer: Literphor: right
(19:34:03) Literphor: chealer: Heh you’re the first person I’ve ever seen use those units
(19:34:40) chealer: Literphor: Heh, you're not the first person telling me I'm the first person they see use those units.

dig(1) and other DNS clients sometimes taking 5 seconds to return the results of a local query

admin Friday November 27, 2015

After installing a few Debian VMs inside our Windows environment, I noticed very strange performance problems resolving local domain names on local DNS servers this week. Simple queries which should have taken milliseconds would sometimes be very slow. And these slow queries would consistently take 50 deciseconds to resolve - never 49 or less. It looked like a timeout, but the logs mentioned no such thing, and it was hard to tell when the timeouts would occur, except that they occurred more often on the first test after I had stopped testing for a few minutes. For example, a trivial connection to a local MySQL server could take just above 50 ds to establish:

$ time echo 'SELECT 1;'|mysql -u [...] --password=[...] -h PC-0002
1
1

real 0m5.014s
user 0m0.000s
sys 0m0.004s
pcloutier@deimos:/var/lib/dpkg/info$


This was far from MySQL-specific. dig(1) would suffer from the same delays:

$ time dig @phobos.lan.gvq titan.lan.gvq

; <<>> DiG 9.9.5-9+deb8u3-Debian <<>> @phobos.lan.gvq titan.lan.gvq
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15593
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1280
;; QUESTION SECTION:
;titan.lan.gvq.                 IN      A

;; ANSWER SECTION:
titan.lan.gvq.          3600    IN      A       10.10.1.29

;; Query time: 0 msec
;; SERVER: 10.10.1.23#53(10.10.1.23)
;; WHEN: Fri Nov 27 12:14:42 EST 2015
;; MSG SIZE  rcvd: 58



real 0m5.018s
user 0m0.012s
sys 0m0.004s
pcloutier@deimos:/var/lib/dpkg/info$

...where phobos.lan.gvq is a local DNS server, and titan is just a local hostname which is supposed to resolve very quickly. Attentive readers will notice that Query time indicates 0 msec. This is because the DNS query proper does take 0 ms; the delay comes from resolving the name server itself, which I specified by name. Accordingly, this cannot be reproduced with dig if the name server is specified by IP.
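
A quick way to observe the difference is to time both invocations (names and addresses reused from the output above):

$ time dig @phobos.lan.gvq titan.lan.gvq   # name server specified by name: may hit the 5 s delay
$ time dig @10.10.1.23 titan.lan.gvq       # name server specified by IP: consistently fast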

This turned out to be an IPv6-related glibc issue. The first big advance came from a Stack Exchange thread, which allowed me to confirm that the delay was due to a timeout in glibc's getaddrinfo(3). You can confirm this with high certainty by changing that delay using resolv.conf's timeout option (glibc's default timeout is 5 seconds). For example, if you notice that the delay decreases to 3 s after setting "options timeout:3", then you are clearly experiencing timeouts. If not, sorry, this post will not help you.

The next step was to determine whether that timeout was IPv6-related. This can be achieved by disabling IPv6 on the GNU clients, but it may be simpler to just set the single-request and single-request-reopen options. If none of these help, you know your problem is caused by timeouts, but its cause differs from ours, and the rest of this post will not help.
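
For reference, a test /etc/resolv.conf combining these options could look like the following (the nameserver address is the one from the dig output above; adjust to your environment):

nameserver 10.10.1.23
options timeout:3 single-request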

If disabling IPv6 helped but single-request and single-request-reopen did not, sorry, I do not know more about your issue. But if single-request or single-request-reopen helped, your problem must be similar to ours. Due to a glibc 2.9 change (see the "DNS NSS improvement" section), getaddrinfo() often runs into a communication issue with the DNS server when querying IPv4 and IPv6 addresses, due to what Ulrich Drepper describes as server-side breakage. Since at least glibc 2.10, when glibc detects that this glitch may have happened, it works around it by re-sending the queries serially rather than in parallel, so the problem "merely" causes a timeout. If there is a firewall between your DNS server and you, see the Stack Exchange thread above. If a firewall issue is excluded and your DNS server is running Windows Server, you are probably experiencing the same incompatibility as ours.

I first thought our Windows Server 2008 [R1] servers were causing this because of an old bug, but according to a 2014 blog post, this still happens with Windows Server 2012 R2. Although the tcpdump shown on the Stack Exchange thread describes pretty well what is going on, I had to perform my own captures to understand why the timeout would only happen sometimes, and succeeded quickly enough. When the problem does not happen, getaddrinfo() queries both A and AAAA (IPv6) records in parallel in packets 7 and 8 and receives both replies in packets 9 and 10:

Capture 1 - no problem

Packets 11 and 12 show the DNS query proper, since this capture shows the full activity for the dig command explained above.

When the problem happens, what was packet 9 in capture 1 is gone, which is why getaddrinfo() retries 5 seconds later (after the gap between packets 26 and 30), in packets 30 and 32, but this time sequentially:

Capture 2 - serial retry after 5 seconds timeout


Why does the problem happen in capture 2? Surely because of that extra color... the beige ARP packets at 24 and 25. In other words, in the first case, the DNS client's IP address is in the DNS server's ARP cache, so the server does not need to resolve it. In the second case, the client's entry in the server's ARP cache has expired, so the server needs to perform an ARP query before it can send what would be packets 9 and 10 in the first case (I would have thought the server could figure out the MAC address from packets 22 and 23, but apparently that is not how Windows works).

As explained in Microsoft's ARP caching behavior documentation, in recent Windows versions, an ARP cache record is [usually] maintained for a random time between 30 and 90 seconds after the last time it was used. This must be why that bug was pretty hard to track. Therefore, if the server and the client communicate at least every 30 seconds, this timeout should only be experienced once. This also means that, in the case of Windows Server DNS servers, the behavior would be the same if glibc didn't fall back to serial queries after the timeout.
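
For anyone who wants to reproduce such a capture on the client side, a filter along these lines shows both the DNS exchange and the ARP traffic (the interface name is an assumption):

$ tcpdump -n -i eth0 'port 53 or arp'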

Causes and solutions

I have not found a server-side workaround (besides, I guess, disabling IPv6). Unfortunately, I believe this needs to be worked around on every GNU client.

It is more interesting to try determining the root cause of this issue and definitive solutions. glibc developers consider it a Windows bug. But would Microsoft leave a bug which must be triggered millions of times per day unfixed for years?

Windows Server

The captures clearly show that glibc starts with the IPv4 query, which means the Windows server can only send the AAAA reply after it can send the A reply. In general, that must mean it replies to both. But when the server has to wait for an ARP reply before sending its DNS replies, it may have received the AAAA request before it is able to send the A reply. I would need a server-side capture to confirm it, but it could be that Windows detects that situation and decides to send a single reply to save bandwidth and/or favor IPv6 usage. If the goal were simply to favor IPv6, it would probably be better to just send the AAAA reply before the A reply.

Windows may be doing a heuristic optimization by guessing that the client just needs one address, which would certainly be wrong sometimes. This could be considered a bug insofar as failure to reply constitutes a bug.

DNS clients and the protocol

But there is certainly a client-side issue as well, at least in this case. The client requests both an IPv4 and an IPv6 address while it only needs one. Unless this is a strategy to minimize further queries, it is inefficient.

According to this Stack Overflow thread, it is not clear that requesting both A and AAAA records in a single DNS query is possible. And even that would not be the optimal solution, which would be to request whatever single IP address should be used.

From getaddrinfo()'s perspective, this cannot be optimized, since the caller has requested that any address be returned. So the problem really lies in dig and other DNS clients calling getaddrinfo() just to resolve a hostname; these clients are all suboptimal. gethostbyname() is optimal, but obsolete since it does not support IPv6. There should be a resolving function which either returns the first IP address obtained, or returns both without blocking while waiting for the second. Clearly, each program cannot implement such a function itself. I do not know glibc well, but a C library's API should allow such a resolution. If it doesn't, glibc has an issue too.
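
To illustrate the current situation, here is a minimal sketch (the hostname is reused from the examples above) showing that a caller which only needs an IPv4 address can already avoid the parallel AAAA query by restricting the hints, whereas AF_UNSPEC triggers both queries:

#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void)
{
    struct addrinfo hints, *result;

    memset(&hints, 0, sizeof(hints));
    /* AF_UNSPEC makes getaddrinfo() query A and AAAA records;
     * AF_INET restricts it to a single A query. */
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;

    int error = getaddrinfo("titan.lan.gvq", NULL, &hints, &result);
    if (error != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(error));
        return 1;
    }
    /* ... use the first (IPv4) result ... */
    freeaddrinfo(result);
    return 0;
}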

HTML/CSS - Centering

admin Wednesday November 11, 2015

Centering in CSS is not easy. Each time I must vertically center something, I have to search the web to convince myself that I have no choice other than using a hack. So I found it comforting to see this admission, coming from the W3C itself:

At this time (2014), a good way to center blocks vertically without using absolute positioning (which may cause overlapping text) is still under discussion.
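
For illustration, this is the kind of hack alluded to above - absolute positioning plus a transform (the class names are made up):

.parent {
  position: relative;
}
.child {
  position: absolute;
  top: 50%;                     /* push the top edge to the middle of the parent */
  transform: translateY(-50%);  /* then pull the block up by half its own height */
}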

Fully Free

Kune ni povos is seriously free, though not completely humor-free:

  • Free to read,
  • free to copy,
  • free to republish;
  • freely licensed.
  • Free from influence (original content on Kune ni povos is created independently; KNP is entirely funded by its freethinker-in-chief and author, and does not receive any other funding from any corporation, government, think tank or any other entity, whether private or public), advertisement-free
  • Calorie-free (but also recipe-free)
  • Disinformation-free, stupidity-free
  • Bias-free, opinion-free (OK, feel free to disagree on the latter)
  • Powered by a free CMS...
  • ...running on a free OS...
  • ...hosted on a server shared by a great friend for free