Among my colleagues and in the news, I've been seeing a recurring theme: that we must create a new system -- a universal tool -- to unify the world, eliminate security concerns, and increase productivity. To some, this tool is a universal language. The most recent notable effort has been Wyvern, which is sponsored by the NSA. To others, the magical tool is the ability to combine all languages into one marvelous whole, an example being the Rusthon Markdown Compiler.
I am sure there are many other takes on what this universal tool might be, but the purpose of this article is not to document them. My intention is to provide an argument as to why these solutions aren't what you're looking for.
To begin, I will present to you the Go programming language. It is the result of intelligent design: an extremely simple language that is concise and easy to read. It performs exceedingly well, and its documentation is easy on the eyes. I will argue that it is both secure and productive, with one exception: adoption.
Regardless of how well the language was designed, Go isn't yet at the stage where there are enough people to review all the important codebases written in it. Should anyone ever write a database, an operating system, a network stack, or encryption software in Go? The simple fact is that very few people are capable of checking such code for security holes. Therefore, the system is not secure.
This notion has been described before, and it is known as Linus' Law. "The law states that 'given enough eyeballs, all bugs are shallow'; or more formally: 'Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone.'"
To maximize security, you must maximize the effect of Linus' Law.
There is a well-known cautionary example for this law: the Heartbleed security bug. According to Wikipedia, OpenSSL had only four core developers at the time, and the change that introduced the bug was reviewed by only one of them.
A logical argument: a new language cannot have many users. If a language or system does not have a great number of users, then few people are capable of (and therefore likely willing to) check for, remove, and prevent security flaws, so it cannot qualify as secure. Therefore, a new language is not secure.
Therefore, any language being introduced today in the hopes of increasing security and productivity can only achieve its aims after years, or quite possibly decades, of existence.
Now that we've eliminated practically every system developed in the last decade from our secure list, let's look at some other solutions. Let's make a grand, unified language from multiple languages. Multiple languages can complement each other well. Some languages can do some things more securely than others, just as they can do things more productively than others.
Practically speaking, however, you're unlikely to hire anyone who knows all of the required languages for your project, and so your candidate pool shrinks with the addition of each new language.
Another logical argument: a multiple-language project requires programmers who are competent in multiple languages. As the number of required languages increases, fewer programmers are competent in all of them. Therefore, fewer programmers are available to review and fix security flaws. Therefore, according to Linus' Law, the project is not secure.
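To see how quickly the reviewer pool collapses, here is a minimal sketch with made-up numbers. It assumes, purely for illustration, that knowledge of each language is held by an independent fraction of programmers, so the pool of people competent in all of them shrinks multiplicatively:

```python
def reviewer_pool(total_programmers, fractions):
    """Estimate how many programmers know *every* required language,
    assuming each language is known by an independent fraction of them."""
    pool = total_programmers
    for fraction in fractions:
        pool *= fraction
    return int(pool)

# Hypothetical figures: one million programmers; 30% know language A,
# 20% know language B, 10% know language C.
print(reviewer_pool(1_000_000, [0.30, 0.20, 0.10]))  # 6000
```

Under these assumed figures, a three-language project leaves only 6,000 of a million programmers able to review the whole codebase; each additional language multiplies the pool down further.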
A similar line of logic applies if you argue that you only need one programmer per language: more programmers are then required to review the same codebase, yet only a finite number of competent coders are available.
By this standard, even a traditional LAMP stack, which spans several distinct technologies and languages, is certainly not ideal.