
Challenterests

We are privileged to live in a world where we witness progress going faster than ever before, which we can help to accelerate even further. Opportunities for innovations or simple improvements are countless. At the same time, our arsenal of tools makes our work more efficient than ever before. And—to top it off—a number of global catastrophic risks ensure our journey stays challenging and exciting.

Among all these interesting challenges, one has to choose. My interests include science and technology (operating systems, wikis, CMSs, etc.), organizations, economics (particularly environmental economics) and politics, including public policy, group decision making and resource allocation. I am concerned by environmental disruption and war.
Much has been written about these topics, and much remains to be written. This page explores some of them, briefly describing each (usually with links to further information) and explaining why it caught my attention. It covers innovations at the stage of diffusion, development or mere research, and a few topics whose future is still far from clear.

Software

Wikis

A wiki is a type of website which not only displays pages, but lets readers edit its pages through their browser. Wiki pages are free-form, with features resembling those of word processors, making wikis very versatile tools. They can be used for publication, collaboration, documentation, research and design, project management and other purposes in all domains. This very page is a wiki page (no, you're not allowed to edit it, but suggestions are welcome).

By unifying the interfaces traditionally used to consult and modify a website, wikis minimize the barrier to contribution. Depending on configuration, the barrier may be so low that someone finding a wiki page from a search engine could read it, notice a typo, fix it, keep reading, then leave and never visit that wiki again. With this low barrier and the easy involvement of many contributors, a new and decentralized model of content production became practical: peer production of content. (Economic note: for those familiar with the concept of peer production as defined by Yochai Benkler, wikis address the problem of integration provision by minimizing the cost of integration and distributing integration among peers. The Free content section discusses commons-based peer-produced content.)

Wikipedia was created in 2001 "to add a little feature to Nupedia", then a nascent Internet encyclopedia. By 2007, Nupedia had been shut down and de facto replaced by Wikipedia, which had become, in six years, the largest encyclopedia and one of the ten most popular websites. "Wiki" does mean "fast" (in Hawaiian).

Quality

The revision history gives wikis soft security. Problematic changes are cheap to undo once they are identified. This mechanism is great, but sometimes insufficient. Some changes may cause too much damage before they are identified as problematic and reverted. Most wikis offer contributors who want to propose a change a single structured avenue: changing the page. Confident or not, a contributor who directly implements a change takes a certain risk (in practice, contributors who are not confident enough to "be bold" may write a comment or create a new draft wiki page, but these contributions may not be noticed). Software developers familiar with revision control systems can imagine an equivalent system in software: whenever you committed something, the change would be immediately live (released and deployed). You'd think twice before committing that apparently fine change, wouldn't you?

Some wikis are used purely as collaborative writing tools, with a "release process" initiated at some point to ensure quality. In fact, Wikipedia was initially such a wiki: each article was to be "released" to Nupedia once it had reached sufficient quality. But with the prohibitive integration costs of this release process (Nupedia and Wikipedia ran on two different systems with no integration between them), only 24 articles were approved during Nupedia's three years of existence. The level of quality it required and its inefficient peer review system condemned Nupedia to remain unknown, and eventually to be eclipsed by the peer-produced Wikipedia. By creating Wikipedia, Nupedia effectively increased its productivity and became a success, but had to give up its original primary goal: reliability.
Yet peer production and peer review are not intrinsically mutually exclusive. There is interest in turning wikis into tools that not only allow great collaboration, but also quality-ensured publication, without adding review costs much higher than those the quality level sought necessarily requires.

Revision approval / Pending changes
The first step towards a workflow providing harder security is to allow more than a single current version for each page. Revision approval (or Pending changes on Wikipedia) introduces a last approved version, in addition to the last contributed version. For example, a page with 100 versions (all of them approved) could be edited by a new contributor. The contributor's version would not get automatically approved, but would rather be queued for approval. Site readers would keep seeing version 100 of the page, possibly with an indication that a new version was proposed, until version 101 would be approved.
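
To make this concrete, here is a minimal sketch (with hypothetical names; this is not MediaWiki's actual Flagged revisions code) of the data model behind revision approval: every edit creates a new version, but readers keep seeing the last approved version until a reviewer approves a later one.

```python
# Minimal sketch of a "pending changes" page model (hypothetical names,
# not the actual Flagged revisions implementation).

class Page:
    def __init__(self, initial_text):
        self.versions = [initial_text]    # version numbers are 1-based
        self.last_approved = 1            # the version shown to readers

    def edit(self, text, editor_is_reviewer=False):
        """Save a new version; auto-approve only for trusted reviewers."""
        self.versions.append(text)
        if editor_is_reviewer:
            self.last_approved = len(self.versions)
        return len(self.versions)         # the new version's number

    def approve(self, version):
        """A reviewer approves a pending version."""
        self.last_approved = version

    def displayed_text(self):
        """What ordinary readers see: the last approved version."""
        return self.versions[self.last_approved - 1]

page = Page("Initial article text.")      # version 1, approved
v = page.edit("Anonymous rewrite.")       # version 2, pending approval
assert page.displayed_text() == "Initial article text."
page.approve(v)                           # a reviewer approves version 2
assert page.displayed_text() == "Anonymous rewrite."
```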

This approach introduces quality through hierarchy: only some editors are granted the privilege to approve pages. Wikipedia has already started using such a system, to some degree. This harder security was added to articles attracting moderately frequent vandalism on the English Wikipedia. The community is still debating if and how Pending changes should be used. On Wikipedia, contributors who want to review articles need to request permission from administrators. This workflow is based on MediaWiki's Flagged revisions extension. Tiki also supports revision approval.

Voting on changes
Although simple revision approval as described above is sufficient in several contexts, it is not ideal in organizations which are not hierarchical by nature, such as peer production enterprises. Keeping Wikipedia as an example, it would make the most sense to let everyone "approve" changes and, if needed, simply require multiple approvals from any peers, instead of creating a hierarchy of special contributors. Such an approach amounts to voting on changes. Voting is in fact not necessarily a hard security measure: a change could be accepted ("released") immediately, but be automatically "canceled" if opposed. For example, a page is at version 100 (approved) and user A saves new version 101. Version 101 goes live immediately, but still shows up on a list of recent changes. Users B and C both review the change and oppose it. By saving version 101, user A implicitly voted for that version; but with two votes against (from B and C), there are now more votes against version 101 than for it. The change is therefore considered rejected, and the default version goes back to 100.
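
The rule just described is simple enough to sketch. In the following toy model (a hypothetical design, not Scholarpedia's actual implementation), a saved change goes live immediately and counts as one implicit vote from its author; as soon as opposing votes outnumber supporting votes, the page rolls back to the previous accepted version.

```python
# Toy model of "voting on changes" (hypothetical design).

class VotedPage:
    def __init__(self, text):
        self.accepted = text              # last accepted version
        self.current = text               # what readers currently see
        self.votes_for = 0
        self.votes_against = 0

    def save(self, text):
        """A contributor saves a change: live at once, one implicit vote for."""
        self.current = text
        self.votes_for, self.votes_against = 1, 0

    def vote(self, support):
        if support:
            self.votes_for += 1
        else:
            self.votes_against += 1
        if self.votes_against > self.votes_for:
            self.current = self.accepted  # change rejected: roll back
        # (a real engine would also define when a change becomes the new
        # accepted version, e.g. after enough support or a review period)

page = VotedPage("version 100")
page.save("version 101")    # user A: live immediately, 1 vote for
page.vote(False)            # user B opposes: 1 for, 1 against
page.vote(False)            # user C opposes: 1 for, 2 against
assert page.current == "version 100"
```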

With classic wiki engines, the only way to tell the system you oppose change x is to revert it, which is itself a change from the new situation. If the author of change x disagrees with your reversion, his only way to record his opposition is to revert your change. Edit wars are thus, to some extent, a natural result of the limitations of classic wikis. Voting on changes would allow opposing without undoing, enabling more efficient and civil collaboration.

This innovation is actually already in use at Scholarpedia, an encyclopedia highly concerned with reliability. Most contributors cannot push a change without approval, in the form of supporting votes from other contributors. (Software note: Scholarpedia uses a modified version of MediaWiki, but its software development is currently done in private.)

Voting on changes is very promising, but wiki engines can go even further to support quality-ensured yet decentralized governance. I am not going to discuss all the interesting enhancements imaginable (where's the limit?), but those interested should find Bryan Ford's draft write-up on the topic interesting. It is, to my knowledge, the first proposal to apply liquid governance to wikis.

Discussion groups

I have been involved in various discussion groups for over ten years. I like powerful discussion engines. I am not a big fan of mailing lists and usually prefer forums. But even forums have their issues. In particular, too much information…

Information overload and filtering

The Internet is full of information. In one sense, this constitutes a problem: "too much information, not enough time". This is particularly true in large discussion groups, where merely keeping up takes readers considerable time. In many open groups, the value of messages is very uneven: some messages should not be missed by anyone, while others are only worth reading for a few members.

The problem of information overload was already being discussed in 1998. Jacob Palme's Information filtering explains that the average message takes 4 minutes to write and half a minute to read. Therefore, starting from 8 readers, a message takes more time from its readers than from its writer (8 × 0.5 = 4 minutes). How do communities of thousands of members scale their discussion groups to maintain the value of reading?

Traditionally, Internet forums and mailing lists simply rely on a group of moderators to approve messages (or to eliminate messages deemed unhelpful, when approval is not required). This solution, which still largely prevails, is highly centralized. It is also highly subjective: where is the line between a good and a bad message? Usually, the result is very limited moderation, which only filters destructive messages such as spam. By eliminating "bad" messages, that form of moderation only starts to address information overload. A serious solution requires prioritization of interesting messages.

A small number of discussion groups go further and allow rating messages. For example, on Slashdot, comments accumulate a score, bounded to the range of −1 to 5 points. This system does not prevent any reader from accessing any comment, but it better orients readers towards the most important comments (by recording how interesting everyone found each comment). When viewing the site, a threshold can be chosen from the same scale, and only comments with a score meeting or exceeding that threshold will be displayed.
Moreover, each user has karma, which starts neutral and can go from −1 (low) to 2 (high). When a user's comment is rated up or down, the user's karma increases or decreases accordingly. A comment's initial score depends on its author's karma. Rating a comment therefore indirectly sets its author's reputation (and consequently the initial score of the author's future comments).
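
As a rough illustration, here is a toy model of the scheme described above (simplified; the real system involves moderation points, metamoderation and more, all omitted here): comment scores are bounded, karma follows ratings, and readers filter with a threshold.

```python
# Toy model of Slashdot-style scoring (simplified).

def clamp(value, low, high):
    return max(low, min(high, value))

class User:
    def __init__(self):
        self.karma = 0                    # starts neutral; bounded to [-1, 2]

class Comment:
    def __init__(self, author):
        # a high-karma author's comments start with a higher score
        self.author = author
        self.score = clamp(1 + author.karma, -1, 5)

    def rate(self, up):
        delta = 1 if up else -1
        self.score = clamp(self.score + delta, -1, 5)
        # rating a comment also nudges its author's reputation
        self.author.karma = clamp(self.author.karma + delta, -1, 2)

def visible(comments, threshold):
    """Reader-chosen threshold: show only comments scoring at least it."""
    return [c for c in comments if c.score >= threshold]

alice = User()
c1 = Comment(alice)         # starts at score 1 (neutral karma)
c1.rate(up=True)            # score 2; alice's karma rises to 1
c2 = Comment(alice)         # starts at score 2 thanks to karma
assert visible([c1, c2], threshold=2) == [c1, c2]
```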

Collaborative filtering recommender systems go even further. In an open community, collaborative filtering largely replaces moderators in a more distributed way, allowing all users to rate messages. Such distribution not only decentralizes message prioritization, it also makes it more neutral. In fact, collaborative filtering (in the modern sense) allows prioritization better than neutral: a personalized bias. The importance of each message is estimated based on the evaluations of readers with tastes similar to yours.
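
For the curious, here is a minimal sketch of user-based collaborative filtering: a message's predicted interest for you is a similarity-weighted average of other readers' ratings, so like-minded readers weigh more. (Illustrative only; production recommender systems use more robust models.)

```python
# Minimal user-based collaborative filtering sketch.

from math import sqrt

def similarity(a, b):
    """Cosine similarity between two users' ratings on common messages."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[m] * b[m] for m in common)
    na = sqrt(sum(a[m] ** 2 for m in common))
    nb = sqrt(sum(b[m] ** 2 for m in common))
    return dot / (na * nb) if na and nb else 0.0

def predict(me, others, message):
    """Similarity-weighted average of other readers' ratings of message."""
    num = den = 0.0
    for other in others:
        if message in other:
            s = similarity(me, other)
            num += s * other[message]
            den += abs(s)
    return num / den if den else 0.0

# ratings: message id -> score (-1 = boring, +1 = interesting)
me         = {"m1": 1, "m2": -1}
likeminded = {"m1": 1, "m2": -1, "m3": 1}
contrarian = {"m1": -1, "m2": 1, "m3": -1}
print(predict(me, [likeminded, contrarian], "m3"))   # 1.0: likely interesting
```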

Unfortunately, collaborative filtering is still virtually unused in discussion groups. Discussion group engines do not support it, and have limited support for other advanced message prioritization techniques. As of 2012, this represents a major opportunity for collaborative communities, in particular for large-scale peer production.

Those who want to go further on this topic may appreciate Clay Shirky's Group as User: Flaming and the Design of Social Software, which treats the specific problem of "flaming", or inappropriate communication. The essay presents it as an instance of the economic problem of the tragedy of the commons. Although this is far-fetched, it highlights the role played by egoism in flaming. Identifying egoism as a root cause of flaming makes it clear that solutions such as Netiquette will not give satisfying results if used alone.
However, I disagree with a key part of the essay concerning the source of the current issue. I think the main reason current discussion group engines are so vulnerable to flaming is not software designers, but the technical difficulty of implementing solutions inside the paradigm of mailing lists (which is what Shirky discusses).

Economics

With the development of computers, the need for software grew, and software production started to require significant resources and investment in the 1960s. Edsger Dijkstra observed this phenomenon as early as 1972 (The Humble Programmer, Communications of the ACM):

The major cause of the software crisis is that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.

The software industry was already worth more than 930 billion US$ per year in 2020. Software is thankfully a non-rivalrous good. This should have resulted in abundant quality software, but economics and politics decided otherwise. How could the talent needed to write quality software be secured? When software could no longer simply be bundled with hardware, proprietary software appeared.

Proprietary software

Proprietary software relies on a classic business model for content: a firm produces content and, as its copyright holder, licenses it to customers. Potential users need to pay to use the software. If the producer sells enough licenses, the fixed cost of content production is covered. This model, which treats software as a club good, has several problems.

From the producer perspective, since software has low excludability, it is often "stolen". Some "copy protection" schemes can complicate piracy, but are difficult to impose without creating issues for legitimate users.

From the consumer perspective, dependency on vendors (vendor lock-in) is the main problem. For example, you buy word processor A for 100$ and use it to write documents for a project. When the project grows, you hire someone to help you. Since the employee needs to work on your documents, you need to get a word processor for him too. A new company has just released the much-anticipated word processor B, which is better than A, yet priced at a mere 50$. You would buy B, but since your existing documents are in A's format, which differs from B's, you couldn't work on the same documents as your recruit. Therefore, you buy a new license for A, now priced at 109$. Vendor lock-in, through the network effect, just cost you 59$, and you're both still using an inferior word processor.

You could instead have bought two copies of B and stopped using A. This way, you would save 9$ and get a better word processor. The catch is that you would incur switching costs, having to convert your documents from A's format to B's.

Lack of interoperability is only one source of lock-in. Users do not buy the software, but a license to merely use it (usually without source code; in some cases proprietary software is not closed source, but this is very rare, in particular since keeping the source secret enhances excludability, making piracy harder, and prevents competitors from copying the product's source code). So when a user finds an issue (such as a bug or a missing feature) in proprietary software, the user cannot solve it without the vendor's collaboration. If the issue has not been addressed, the user has to rely on the vendor to address it and then obtain a new version.

From a general perspective, license prices do not reflect "actual" production costs (marginal costs), since software production only has a fixed cost (if distribution costs are ignored). Charging for licenses creates artificial scarcity, which causes deadweight loss. As with other non-rivalrous goods, no pricing strategy can finance an optimal production level: flat pricing is inefficient, and value-based pricing is difficult. This is particularly true for software, whose utility varies greatly from customer to customer. The extent of the problem can be considered a market failure.
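
A tiny numeric illustration (with made-up valuations) may help. At zero marginal cost, any positive price excludes users who value the software at less than the price but at more than nothing; the value they lose is not captured by anyone.

```python
# Deadweight loss from pricing a zero-marginal-cost good (made-up numbers).

valuations = [100, 60, 20]           # what three potential users would pay

def total_welfare(price):
    buyers = [v for v in valuations if v >= price]
    revenue = price * len(buyers)                   # producer's share
    consumer_surplus = sum(v - price for v in buyers)
    return revenue + consumer_surplus

print(total_welfare(0))     # 180: everyone uses the software
print(total_welfare(60))    # 160: the user valuing it at 20 is excluded
# The missing 20 is deadweight loss: value destroyed by artificial scarcity.
```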

Free software

Definition
Free software is software with a license that waives the privileges of its copyright owners. The software's source is available, and copyright does not prevent doing much (sometimes anything) with it. Such software can be modified and shared by anyone, rather than only used by licensees. In contrast with (most) proprietary software, a user who finds an issue in free software is allowed to get the issue addressed by anyone, even himself or herself.

Economic viability
Free software can be obtained without payment, so its production cannot follow a traditional business model. Free software is usually developed by organizations and private individuals simply because they need software.

For example, the Fooish government may have needed a text editor and created one that supports boldface. Then a visual design firm needs a text editor that supports colors. The firm can't afford to create a text editor from scratch; however, if the Fooish government made its editor freely available, the firm may have the budget to add color support. If the firm contributes that support back, the Fooish government ends up profiting from having offered its editor freely, as it got color support for free. If more users with new needs get involved, the Fooish government may eventually become just one member of a diverse group of developers.

Free software can be developed in this fashion, but it takes time. The entity starting a project must need the software badly enough to justify making the initial investment alone. Clearly, projects could be started much more easily if all potential users agreed to share the costs; a project's value to all users may easily be orders of magnitude above its value to any single user. Such a model does work, but not as well as it should: it suffers from the problem of free riding. Even though free software often has an enormous benefit-cost ratio from a global perspective, few people are willing to be the ones who pay for a majority of free riders. In fact, some users may consider helping other users an actual problem (a firm may even be willing to pay to hurt competitors). Since offering software for free effectively prices it under its value to every consumer, the resulting consumer surplus (so to speak) constitutes a positive externality of free software production. Therefore, free software cannot naturally reach an optimal production level.

Some attempt to compensate for this by giving to free software projects. Organizations and individuals donate various resources, including money, which helps with material expenses and is sometimes used to remunerate developers. More importantly, labor itself is donated. Contributions are often motivated by both self-interest and philanthropy: by contributing improvements directly, contributors control how their donations are used.
Unfortunately, these efforts obviously cannot suffice to bring free software to an optimal production level.

Conclusion

The process of application development is simply too fragmented at this point; [...] The single most defining characteristic of today’s infrastructure is that there is no single defining characteristic, it’s diverse to a fault.

Stephen O'Grady, The Developer Experience Gap

In short, production of proprietary software is inefficient and production of free software is insufficient. Competition has created a superabundance of software products, but no abundance of quality software products. The failure of both paradigms and our neglected, chaotic software ecosystem are not unrelated. Reacting to Heartbleed's discovery, Dan Kaminsky observed: "We are building the most important technologies for the global economy on shockingly underfunded infrastructure." Paul Chiusano also used Heartbleed as an example of the results of failed software economics, though I am skeptical of the solution he suggests. Heartbleed is also at the heart (pun intended; excuse the exaggeration) of the Ford Foundation's Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure. The cost of poor software quality is estimated at more than a trillion USD per year in the USA alone.

Some describe software as an anti-rival good, meaning that sharing a piece of software not only maintains its value to existing users, but increases it. Hence, software could be called a more-than-natural monopoly, in the sense that more competition on a software good diminishes its utility.

Could something be done, at least for economically-critical software? There is no easy solution, but Carlos del Cacho suggests a possible avenue after looking more deeply at the economics of software in The Economics of Software Products: an Example of Market Failure.

Software economics - Public goods also analyses the economic problems of software and details the free rider problem.

Yochai Benkler wrote an excellent economic analysis of the possible production modes of information products, which applies to software. Chapter III of Coase's Penguin, or Linux and the Nature of the Firm explains the economic model of free information products (including free software) in much more depth.

Free content

Free content is content (for example, text or multimedia content) which is free, in the same sense that free software is free. Producers of free content basically renounce their exclusive rights (copyright) on their content. The best known free content source today is Wikipedia.

As a consumer of free content, you enjoy the same liberties which are usually the privilege of producers. You can therefore freely use the content, touch up, update, expand or generally modify it, store it and share it. You can also promote the content with the same assurance as its creator: you are truly promoting the content itself, not its owner, nor implying that some temporary offer of the content is good value. You are purely recommending something, not serving as a marketing tool.

This model blurs the line between content producers and consumers. Content providing information about a topic is often written by experts and targeted at less knowledgeable readers. Sometimes, an expert is so far from his audience that his writings have low accessibility. Allowing consumers to modify the content lets it easily be made more accessible.

This website is itself free content (see the license for details). If you share these interests, you can reuse this page. If your perspective is slightly different, you are free to adapt the content and publish the modified version. There is no need to request permission.

Open standards

Conventions and standards are the basis of civilization. Indeed, history begins with writing, and before the first prehistoric civilizations, spoken language was already Homo sapiens's most powerful tool.

Standards create network effects, so in today's information society, adopting quality standards is essential. One of the greatest qualities of a standard is openness (in particular, free access to documentation). Open standards allow interoperability between computer systems. In contrast, the use of opaque standards is one of the main ways proprietary software vendors create vendor lock-in.

Some open standards

Some open standards already exist but are not adopted everywhere. Others still need development…

ISO 8601 - Numeric representation of dates and time

Have you ever wondered whether it's safe to eat food with an expiry date of, say, "10 01 11"? If you have consumed food since the year 2000, chances are you have. Such ambiguous date representations led to the development of ISO 8601, which standardizes the YYYY-MM-DD format. For some reason, food manufacturers apparently haven't gotten the news yet.
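
A short illustration of the ambiguity: the same string parses to three different dates under three common regional conventions, while an ISO 8601 date has exactly one reading.

```python
from datetime import date, datetime

raw = "10 01 11"
for fmt, convention in [("%y %m %d", "year month day"),
                        ("%d %m %y", "day month year"),
                        ("%m %d %y", "month day year")]:
    print(f"{convention}: {datetime.strptime(raw, fmt).date()}")
# year month day: 2010-01-11
# day month year: 2011-01-10
# month day year: 2011-10-01

print(date.fromisoformat("2011-01-10"))   # ISO 8601: only one reading
```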

SI, the International System of Units

Everyone knows something about the SI, but some regions of the world know less about it than others. In particular, the United States and Canada continue to use imperial units. Although measurement systems don't have to be very complex, using imperial units alongside SI units is, and maintaining different measurement systems has huge costs. However, changing measurement systems is also far from easy. The topic is not particularly well documented, but Canadians interested in change management should find Wikipedia's article on Metrication in Canada very interesting.

In Canada, it seems more education and political will are needed to make further progress.

Not quite there yet: Common human language

Natural languages did not appear in one day. In fact, they appeared over several generations, in separated communities (in truth, they are not even done appearing). The result is that although language is universal in the sense that it is ubiquitous, no single language is universally known. There are thousands of living human languages today, and the odds that two persons picked at random share the same native language are quite small.

In our globalized information society, communication between everyone is extremely important. The traditional solution to this problem has been to learn a number of second languages to be able to communicate with speakers of more languages. However, as population mobility and international collaboration increased, as telecommunication improved, and as linguistic diversity started to be appreciated, the scalability problem of this approach has become evident, and the idea of a universal auxiliary language was born.

Such a language would ideally be the only second language anyone would have to learn: the lingua franca of multilingual communities. Technically, the language chosen should be as easy to learn as possible. Consequently, it should be as close to most natural languages as possible, so that learners find it minimally different from their native language.

Projects to make such a language a reality started as early as the first half of the 19th century. Constructed languages were created, based on natural languages, but designed to be as regular and as close to most natural languages as possible. The 19th century saw the creation of Volapük, then of the more practical Esperanto. Most of the work on constructed languages happened in the 20th century, with the creation of Ido, Occidental, Novial and Interlingua. However, as of 2011, there is still no agreement on which language should be adopted as the universal auxiliary language… nor any agreement in sight. In fact, such a language may not yet exist.

See also: My involvement in Ido

Digital identities

The lack of good identity systems has been a problem since the Web appeared, and even before. A specific problem in the digital world is the proliferation of credentials. Originally, each service provider would require its users to create credentials for an account, and this is still the usual scenario today. I have a login and a password for my email provider, my instant messaging providers, the development websites I contribute to, wikis, forums, etc. This situation makes it very difficult to remember all credentials. Storing credentials or reusing them across providers creates security problems; on the other hand, choosing different credentials for each provider and trying to remember them all inevitably causes us to forget some from time to time.

OpenID is an open standard which allows service providers to delegate authentication to identity providers. Users can therefore choose - if they wish - a single identity provider and - if they wish - a single set of credentials to authenticate to their identity provider. Thus OpenID allows users to consolidate their digital identities.
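
To illustrate the idea of delegation, here is a toy sketch (hypothetical names and flow, not the actual OpenID wire protocol, which involves discovery, browser redirects and association handshakes): the service provider accepts a signed assertion from the identity provider instead of storing passwords itself.

```python
# Toy sketch of delegated authentication (not the real OpenID protocol).

import hashlib
import hmac

class IdentityProvider:
    """Holds the user's single set of credentials and signs assertions."""
    def __init__(self, mac_key):
        self.mac_key = mac_key
        self.users = {"alice": "correct horse battery staple"}

    def authenticate(self, username, password):
        if self.users.get(username) != password:
            return None
        sig = hmac.new(self.mac_key, username.encode(), hashlib.sha256)
        return username, sig.hexdigest()     # signed assertion

class ServiceProvider:
    """Trusts the identity provider instead of storing passwords itself."""
    def __init__(self, mac_key):
        self.mac_key = mac_key

    def login(self, assertion):
        username, sig = assertion
        expected = hmac.new(self.mac_key, username.encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected)

# In real OpenID, a MAC key like this is negotiated per association.
key = b"shared-between-identity-and-service-provider"
idp, forum = IdentityProvider(key), ServiceProvider(key)
assertion = idp.authenticate("alice", "correct horse battery staple")
assert assertion is not None and forum.login(assertion)
```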

In December 2009, there were over 1 billion OpenID-enabled accounts on the Internet, and approximately 9 million sites had integrated OpenID consumer support. You probably have one or several OpenID accounts without knowing it.

Reputation

With the information age, an enormous quantity of information has become available to all. But modernity has not multiplied our lifespans by a factor nearly as large. We have many more choices to make, way more options, but we still have very little time to make them. Which song should we listen to next, among the tens of millions available? Which book is most worthy of our time? Which candidate would be best to replace a retiring teacher? How can you tell if the White House's website is as trustworthy as this one?

And did that remain true after the United States' voting system granted control of the White House to a Republican reputed to be a better entertainer than decision-maker? The recent popularization of new cults highlights that reputation is variable, for both organizations and individuals, making it that much more difficult for our limited minds to properly determine trust in a world which now has billions of sources whose reliability constantly varies. Most importantly, these phenomena highlight the critical importance of trust: sufficiently trusting reputable sources, and not overly trusting sources which are, or have become, unreliable.

The reputation age we are entering already provides numerous digital tools to help us evaluate an option's quality, and we have several systems specifically designed to rate choices. But the ongoing decision-making revolution will go much further, with systems tracking customized reputational paths for all kinds of options, hopefully soon allowing us to evaluate the trustworthiness of software systems.

Globally verifiable identity

At the moment, it is impossible for global online projects both to let anyone contribute and to verify that each contributor is a distinct person, which means open projects have problems with sockpuppets. Similarly, no entity can offer online voting to anyone and be sure that each vote comes from a unique voter.

The same problem affects businesses. A musician with a new album may want to offer each person one song of their choice for free. So far, no system allows such a musician to ensure both that everyone can download one song and that nobody can download two.

Some states have national identification numbers, but there is no global system yet. A global trusted unique authentication authority would be a major help to the online world—in collaboration, decision making and spam fighting.

If you are aware of projects to make this happen, please contact me.

Governance

Collective decision making

I have given my biggest interest, liquid governance, its own page.

Single-winner votes: Condorcet voting

Condorcet by Jacques Perrin. Credits to Henry Salomé.
Votes usually determine a single winner among any number of options. By far the most common voting method in such cases is plurality voting, where each voter "votes for" a single option. When just two options are available, the collective preference is clear: with 5 voters, if either option gets 3 votes, that option is preferred by most voters. In such cases, for example Yes/No referendums, plurality voting is all that's needed.


However, with more options, plurality voting can generate seriously incorrect results. For example, imagine options A, B and C and the following preferences for 7 voters:

  • 2 voters prefer A to B and B to C.
  • 2 voters prefer B to A and A to C.
  • 3 voters prefer C to A and A to B.


If these voters do not vote strategically, first-past-the-post will give 2 votes to A, 2 votes to B and 3 votes to C, so C will win, having the most votes. However, 4 voters prefer A to C, while only 3 voters prefer C to A. A is therefore a better collective choice than C. (In fact, A also beats B head-to-head, 5 preferences to 2, making A what is called the Condorcet winner.)

Preferential voting allows voters to express their preferences between options much better than plurality voting does. Condorcet methods are preferential voting methods which give results at least as accurate as plurality voting, and generally much more accurate results when more than 2 options are available.

The tallying method for Condorcet votes is significantly more complex than that of plurality voting, which simply counts the number of votes each option receives. Condorcet voting works with a table representing the number of preferences expressed for each option over each competing option. The tallying method then eliminates the option which loses against all other options, then the option which loses against all remaining options, and so on… The good news is that with the advent of computers and electronic voting, the tallying method's relative complexity is now irrelevant, so Condorcet voting no longer has any practical disadvantage compared to plurality voting.

Despite being several centuries old, Condorcet voting has not been used in governments yet (as of 2011). It has recently started being used by organizations with well-educated members, including medium and large NGOs such as the Debian project, KDE e.V. and the Wikimedia Foundation. Its use to elect representatives is an easy and quickly achievable improvement in politics which would have a major impact on the quality of governance. More public awareness is the only missing piece.
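
Tallying the 7-voter example above takes only a few lines. The sketch below builds the pairwise preference counts and finds the option that beats every other option head-to-head. (Real Condorcet methods add rules for the case where no such option exists.)

```python
# Pairwise tallying for the 7-voter example above.

from itertools import permutations

# each ballot ranks the options from most to least preferred
ballots = ([("A", "B", "C")] * 2 +     # 2 voters: A > B > C
           [("B", "A", "C")] * 2 +     # 2 voters: B > A > C
           [("C", "A", "B")] * 3)      # 3 voters: C > A > B

def prefer(x, y):
    """Number of voters ranking x above y."""
    return sum(1 for b in ballots if b.index(x) < b.index(y))

options = ["A", "B", "C"]
for x, y in permutations(options, 2):
    print(f"{x} over {y}: {prefer(x, y)} voters")

winner = next(x for x in options
              if all(prefer(x, y) > prefer(y, x)
                     for y in options if y != x))
print("Condorcet winner:", winner)     # A, despite C winning under plurality
```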

Competition

Competition can be as constructive as it can be destructive. Some forms of competition provide benefits just as important as those of cooperation. In fact, competition between individuals can be considered the driver of natural selection, and therefore the primary driver of evolution.

Corporate competition

One pillar of the unprecedented development witnessed during the previous centuries is the appropriation of the means of production: capitalism. Competition between corporations has always been a fundamental element of capitalism. Market economies better reward innovation and investment, converting selfishness into wealth.

Unfortunately, such private wealth creates jealousy. As the world gets more complex, network effects increase, and so does the size of corporations. The more wealth there is, the less likely it is to be evenly distributed. And the less equal everyone's wealth is, the more people are jealous of those who create the greatest companies.

More than a century ago, several capitalist states had progressed enough to create huge inequalities. Several states managed the resulting jealousy by creating regulation punishing those who create the most successful businesses. While the consequences for innovation and competition have been known for decades, most developed countries still discourage serious investments with such laws rather than properly redistributing wealth. It may be unsurprising to see countries imposing fines on foreign corporations to collect extra taxes from businesses which can afford them, but continued application of these laws, sometimes against their own companies, has reached new farcical highs, with unsustainable excesses now serious enough for high-profile victims to feel required to remind their own countries that the Western world is not competing alone. (In 2020, testimony by Facebook's CEO to the USA's House antitrust subcommittee had to explain that some economies previously very far from free markets may now be threatened by much less uncertainty than those of developed states.) In democratic states, the right rhetoric (pretending that a law diminishing competition actually increases it) and short-termism can be sufficient ingredients for successful populism.

I advocate fixing legislation to enable fair competition between corporations. The Case against Antitrust Law makes an excellent case in the context of the United States of America.

None of this is to say that enormous producer surplus is never a sign of something wrong. I've blogged about how possibly problematic practices such as vendor lock-in can be addressed constructively using Google's Android as an example.

Destructive competition and free riding

As important as enabling constructive competition is, preventing negative forms of competition, or "free riding", is crucial. A free rider is someone who benefits from a situation as much as other people, but without contributing to it as much as others. For example, someone using a counterfeit bus ticket is a free rider compared to most bus users, who paid for their tickets.

A free rider is not necessarily a criminal. For example, if one forgets to buy a bus ticket one day and is allowed onto the bus anyway, one rides free without having had bad intentions. Free riding is not a problem per se; it is simply, like inequality, a symptom of something suboptimal. The free riding problem "is the question of how to limit free riding (or its negative effects)". In the bus case, free riding may simply be prevented by switching to electronic tickets. Some cases aren't as easy.

When a natural resource is overused, the obvious solution is to limit everyone's use of that resource. The polluter pays principle allows implementing such limits fairly efficiently. For example, if fresh water is limited, water meters can be installed in houses, replacing flat water taxes, to address each household's temptation to ride "free". But what happens if the limited water source is a river separating two countries? How can one country willing to use water with care prevent the other country from free riding?

As recent history shows, there is no realistic way for a country to prevent its competitors from free riding. The only realistic solution is to get rid of interstate competition…

Unity

It is obvious that no difficulty in the way of world government can match the danger of a world without it.

Carl Van Doren, The Great Rehearsal

Today's world is very far from united. The United Nations, the closest thing we have to a world government, has neither an appropriate structure nor significant authority. In reality, the world consists of a couple hundred competing countries. Several of these countries are themselves federations divided into federated states (sometimes called provinces), each holding some sovereign power. Only 3 of the 10 largest countries (by area) are unitary states.

Reducing the number of states enables a better definition of property and the prevention of races to the bottom. These improvements allow negative externalities to be eliminated or reduced. Larger governments also provide the best context for the production of public goods.
It is not clear whether the world is getting more united, but if so, the pace of union is much slower than the pace of progress in other areas, and very slow compared to the speed of environmental degradation.

Thankfully, we at least appear to be somewhat aware of this problem. According to the Thomson Reuters Foundation, 70% of us think a new, more powerful global organization should be created.

Life

I am interested in biology, species and their interactions, evolution as well as human physical and mental health. I am particularly interested in the acquired and innate determinants of variations in individual intelligence and altruism. I am happy to see neuroscience apparently making great progress regarding intelligence. But I am hoping society can also find ways to maintain or boost levels of altruism, in particular after research suggesting that complex societies were enabled by theism (or memes which favor cooperation).

Philosophy

By the way, why should we care about all this? Should we?

When I was introduced to physics, I adopted metaphysical naturalism. For a short time, I was an existential and moral nihilist.

I changed my mind (no pun intended) when I realized physicalism's explanatory gap. While I am no longer a physicalist, I remain a methodological naturalist. I am hoping that the hard problem of consciousness will have a solution (if you have one (but not 42), please tell me). But I'm happy to let neuroscience solve that… someday… hopefully. The vast majority of my thought on neuroscience is summarized in the 52-minute French documentary La fabrique du cerveau.

Meanwhile, I am blindly adopting classic utilitarianism (universal hedonism).



Fully Free

Kune ni povos is seriously free (though not completely humor-free):

  • Free to read,
  • free to copy,
  • free to republish;
  • freely licensed.
  • Free from influence (original content on Kune ni povos is created independently; KNP is entirely funded by its freethinker-in-chief and author, and does not receive funding from any corporation, government or think tank, or any other entity, whether private or public), advertisement-free
  • Calorie-free (but also recipe-free)
  • Disinformation-free, stupidity-free
  • Bias-free, opinion-free (OK, feel free to disagree on the latter)
  • Powered by a free CMS...
  • ...running on a free OS...
  • ...hosted on a server shared by a great friend for free