Shulou (Shulou.com) 05/31 report — SLTechnology News & Howtos, 2025-01-16 update
Most people do not really understand the essence of Lisp, so this article, "What is the essence of Lisp?", walks through it in clear steps. I hope you get something out of reading it.
Re-examine XML
The highest eminence is to be gained step by step. Let's take XML as our first step. But hasn't everything already been said about XML? What could be new here? Yes, there is something. XML by itself is not very interesting to talk about, but the relationship between XML and Lisp is. The concepts behind XML and Lisp are strikingly similar, and XML is our bridge to understanding Lisp. So let's give XML a chance: grab your walking stick and let's explore the untouched wilderness of XML, looking at the subject from a new perspective.
On the face of it, XML is a standardized syntax for expressing arbitrary hierarchical data in a format suitable for human reading. To-do lists, web pages, medical records, car insurance policies, configuration files: these are all places where XML is put to use. For example, let's take a to-do list:
<todo>
    <item>Clean the house.</item>
    <item>Wash the dishes.</item>
    <item>Buy more soap.</item>
</todo>
What happens when you parse this data? How is the parsed data represented in memory? Obviously, a tree is the natural representation for hierarchical data. In the final analysis, XML, for all its human readability, is just a serialization of tree-structured data: any data that can be represented as a tree can be represented as XML, and vice versa. Make sure you understand this point; it is essential to everything that follows.
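To see the XML-to-tree correspondence concretely, here is a minimal sketch (not from the original text) using the JDK's built-in DOM parser; the element names mirror the to-do list above:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XmlAsTree {
    // Parse an XML document into the in-memory tree described in the text,
    // then count the <item> nodes by walking that tree.
    public static int countItems(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return doc.getElementsByTagName("item").getLength();
        } catch (Exception e) {
            return -1; // malformed XML
        }
    }

    public static void main(String[] args) {
        String todo = "<todo>"
                + "<item>Clean the house.</item>"
                + "<item>Wash the dishes.</item>"
                + "<item>Buy more soap.</item>"
                + "</todo>";
        System.out.println("items in the tree: " + countItems(todo));
    }
}
```

The parser builds exactly the in-memory tree the text describes; `getElementsByTagName` is just one way of traversing it.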
One more step. What other kind of data is often represented by trees? There is no doubt that source code is one. Have you ever taken a compilers class? Do you still vaguely remember a bit of it? Source code is stored as a tree after parsing: any compiler parses source code into an abstract syntax tree. This is appropriate because source code is hierarchical: a function contains parameters and a code block, the code block contains expressions and statements, statements contain variables and operators, and so on.
We already know that any tree structure can be easily written into XML, and any code can be parsed into a tree, so any code can be converted to XML, right? Let me give you an example. Look at the following function:
int add(int arg1, int arg2)
{
    return arg1 + arg2;
}
Can you change this function to the equivalent XML format? Yes, of course. We can do this in many ways, and here is one of them, which is very simple:
<function name="add" return-type="int">
    <argument type="int">arg1</argument>
    <argument type="int">arg2</argument>
    <return>
        <add>
            <ref>arg1</ref>
            <ref>arg2</ref>
        </add>
    </return>
</function>
This example is so simple that converting it would pose no big problem in any language. We can convert any program code to XML, and we can convert the XML back to the original code. We could write one converter that turns Java code into XML, and another that turns the XML back into Java. The same approach works for C++ (that may sound crazy, but someone is actually doing it: see GCC-XML, http://www.gccxml.org). Furthermore, any two languages with the same features but different syntaxes can use XML as an intermediary for converting code between them, and almost all mainstream languages meet this condition to some extent. For example, we could use Java2XML to convert Java code into XML, and then use XML2CPP to convert the XML into C++ code. With luck, that is, if we carefully avoid Java features that C++ lacks, we would get a working C++ program. Isn't that beautiful?
All this shows that we could use XML as a universal way to store source code. We could produce a family of programming languages that share a unified syntax, and write converters to translate existing code into XML. If this approach were really adopted, compilers would no longer each need their own syntax parser: they could use an XML parser to produce their abstract syntax trees.
Speaking of which, it's time to ask: we've been studying XML for a while now, but what does it have to do with Lisp? After all, by the time XML appeared, Lisp had been around for thirty years. I assure you, you'll see in a minute. Before we go on, let's do a little thought exercise. Look at the XML version of the add function above. How would you classify it: code or data? You needn't think too hard, because a case can be made for either category. It is XML, data in a standard format. We know it can be generated from a tree structure in memory (that's what GCC-XML does). It sits in a non-executable file, and we can parse it into tree nodes and transform it however we like. Obviously, it is data. But wait: despite the odd syntax, it really is the add function, isn't it? Once parsed, it could be handed to a compiler to compile and execute. We could easily write an interpreter for this XML code and run it directly, or translate it into Java or C++ and then compile and run it. So it is also code.
Where were we? Yes, we have arrived at an interesting key point: a concept long thought difficult turns out to be intuitive and simple. Code is also data, and it always has been. It sounds crazy, but in fact it is inevitable. I promised to interpret Lisp in a completely new way, and I reiterate that promise. But we haven't reached our destination yet, so let's continue.
As I said just now, we could easily implement an interpreter for the XML version of the add function. That may sound like idle talk: who would really do it? Not many people take the idea seriously, and I'm not actually going to do it either. The point is simply that it could be done, and that it matters. Keep that in mind, and let's move on.
Re-examine Ant
Now that we have come to the far side of the moon, don't leave in a hurry; let's explore and see what else we can find. Close your eyes and picture a rainy winter night in 2000, when a brilliant programmer named James Duncan Davidson was working on Tomcat's servlet container. He carefully saved the file he had just modified and ran make. The result was a pile of errors; obviously something was wrong. After careful examination he thought: could the command be failing because a space appeared where a tab should be? Indeed it was, and he had truly had enough of that. Inspired by the moon behind the dark clouds, he created a new Java project and wrote a simple but very useful tool that cleverly used the information in a Java properties file to build the project. Now James could write a replacement for his makefile that played the same role, looked nicer, and had none of makefile's hateful whitespace rules. The tool interpreted the properties file and then took the correct actions to compile the project. Simple and beautiful.
(author's note: I don't know James, and James doesn't know me. This story is based on online posts about the history of Ant.)
In the months after he started building Tomcat with Ant, he felt more and more that Java properties files were not expressive enough for complex build instructions. Files needed to be checked out, copied, compiled, and sent to another machine for unit testing; if something went wrong, the people concerned had to be emailed; if it succeeded, the build had to continue at the highest possible volume, with the volume returning to its original level at the end of the run. Java properties files were simply not enough: James needed a more flexible solution, and he didn't want to write his own parser (he preferred an industry-standard solution). XML seemed like a good choice. It took him a few days to port Ant to XML, and a great tool was born.
How does Ant work? The principle is very simple. Ant hands an XML file containing build commands to a Java program that parses each element (code or data? you decide). In fact it is even simpler than that sounds: a given XML instruction causes a Java class with the same name to be loaded, and its code executed.
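Consider, for example, a typical instruction along these lines (a sketch of Ant's copy task, using the todir and fileset parameters discussed below):

```xml
<copy todir="../new/dir">
    <fileset dir="src_dir"/>
</copy>
```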
The meaning of this snippet is to copy the source directory to the target directory. Ant finds a "copy" task (actually a Java class), sets the appropriate parameters (todir and fileset) by calling Java methods, and then performs the task. Ant comes with a set of core tasks that users can extend at will, as long as a few conventions are followed: whenever Ant encounters an XML element, it finds the class with the same name and executes its code. The process is very simple. Ant does exactly what we described earlier: it is a language interpreter that uses XML as its syntax, translating XML elements into the appropriate Java instructions. We could write an "add" task, and Ant would execute it whenever it found an add element in the XML. Given that Ant is a very popular project, used daily by thousands of companies, this strategy has clearly proved sensible.
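Ant's dispatch trick can be sketched in a few lines of Java. This is not Ant's actual code; the names here (Task, EchoTask, REGISTRY) are invented for the illustration, and real Ant locates classes by name via reflection rather than an explicit registry:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class MiniAnt {
    // A "task" in the Ant sense: something an XML element name can trigger.
    public interface Task {
        void configure(Map<String, String> attrs);
        String execute();
    }

    // Stand-in for Ant's <echo> task.
    public static class EchoTask implements Task {
        private String message = "";
        public void configure(Map<String, String> attrs) {
            message = attrs.getOrDefault("message", "");
        }
        public String execute() { return message; }
    }

    // Element name -> task factory: the heart of the trick.
    private static final Map<String, Supplier<Task>> REGISTRY = new HashMap<>();
    static {
        REGISTRY.put("echo", EchoTask::new);
    }

    // Interpret one "element": pick the class by name, set its parameters
    // from the element's attributes, then run it.
    public static String interpret(String elementName, Map<String, String> attrs) {
        Task task = REGISTRY.get(elementName).get();
        task.configure(attrs);
        return task.execute();
    }

    public static void main(String[] args) {
        System.out.println(interpret("echo", Map.of("message", "Hello World!")));
    }
}
```

Adding a new "operator" to this toy language means registering one more class; no parser changes are needed, which is exactly the malleability discussed below.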
So far, I have not talked about the problems Ant ran into with XML. Don't bother searching its website for answers; you won't find anything of value there, at least not for our topic. Let's move on to the next step; our answer lies ahead.
Why XML?
Sometimes the right decision is not entirely the product of deliberation. I don't know whether James chose XML deliberately; maybe it was just a knee-jerk decision. At least judging from James's article on the Ant website, the reasons he gives are unconvincing. His main reasons are portability and extensibility, and I don't see how either helps in Ant's case. What is the benefit of using XML instead of Java code? Why not write a set of Java classes providing an API for the basic tasks (copying directories, compiling, and so on) and call that code directly from Java? That would still be portable and extensible, no doubt, and the syntax would be more familiar and easier on the eyes. So why use XML? Is there a better reason?
Yes, there is, although I'm not sure James realized it. In its capacity for building new semantic constructs, XML has a malleability that Java cannot match. I don't want to frighten you with obscure jargon; the reason is quite simple and takes little effort to explain. All right, get ready: we're about to make a leap toward the moment of epiphany.
How can the above example of copy be implemented in Java code? We can do this:
CopyTask copy = new CopyTask();
Fileset fileset = new Fileset();
fileset.setDir("src_dir");
copy.setToDir("../new/dir");
copy.setFileset(fileset);
copy.execute();
This code looks similar to the XML version, only a little longer. What's the difference? The difference is that the XML constructs a special copy verb. If we had to write that verb in Java, it would look like this:
copy("../new/dir")
{
    fileset("src_dir");
}
See the difference? The code above (if it were legal Java) amounts to a special copy operator, a bit like a for loop or the foreach loop in Java 5. If we had a converter from XML to Java, we would probably get code like this, which of course does not execute. Because the Java specification is fixed, we have no way to change it from within a program: we can add packages, classes, and methods, but we cannot add operators. In XML, obviously, we can allow ourselves to add such things. To XML's syntax tree we can add any element we please, which means we can add operators at will. If this isn't quite clear, look at the following example. Suppose we want to introduce an unless operator to Java:
unless (someObject.canFly())
{
    someObject.transportByGround();
}
In the two examples above, we set out to extend Java with two operators: a file-copying operator and a conditional operator, unless. To do that, we would have to modify the abstract syntax tree that the Java compiler accepts, and obviously we cannot do that with standard Java facilities. But we can do it easily in XML: our parser builds the abstract syntax tree from XML elements, so generating new operators is just a matter of introducing new elements. We can introduce any operator we like.
For complex operators, the benefit is obvious. Imagine how convenient it would be to have specific operators for checking out source code, compiling files, running unit tests, sending e-mails, and so on. For a specific domain, such as building software projects, such operators can significantly reduce the amount of code while increasing its clarity and reusability. Interpreted XML achieves this easily. XML is a simple data format for hierarchical data, while in Java the hierarchy is fixed (as you'll soon see, the situation in Lisp is very different), so we cannot achieve this goal there. Maybe that is what made Ant successful.
Look at the recent changes in Java and C# (especially the C# 3.0 specification). The C# designers abstract common functionality and add it to the language as operators; the newly added query operators in C# are a case in point. They were added the traditional way: the designers of C# modified the abstract syntax tree and then supplied the corresponding implementation. If only programmers could modify the abstract syntax tree themselves! Then we could build sublanguages for specific problems (like Ant's language for building projects). Can you think of other examples? Mull the concept over, but don't strain: we'll come back to this topic later, when it will be clearer.
Getting closer and closer to Lisp
Let's put operators aside and consider going beyond the limits of Ant's design. As I said earlier, Ant can be extended by writing Java classes: the Ant parser matches XML elements to Java classes by name, and once a match is found, it performs the corresponding task. Why not use Ant to extend Ant itself? After all, the core tasks contain many traditional language constructs (such as "if"). If Ant itself provided the ability to define tasks (rather than relying on Java classes), we would gain portability: we would depend only on a set of core tasks (call it a standard library if you like), with or without a Java environment. The core tasks could be implemented in any way, and other tasks would be built on top of them. Ant would become a universal, extensible, XML-based programming language. Consider the possibility of the following code:
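A sketch of what such code might look like (consistent with the s-expression version shown later):

```xml
<task name="Test">
    <echo message="Hello World!"/>
</task>

<Test/>
```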
If Ant supported the creation of "task" elements like this, the code above would output "Hello World!". In fact, we could write a "task" task in Java and then extend Ant in Ant-XML itself, building more complex primitives out of simple ones, as is commonly done in other programming languages. This is the XML-based programming language we mentioned at the beginning. It's not very useful (can you see why?), but it's really cool.
Take another look at the task example we just discussed. Congratulations: you are looking at Lisp code! What's that? It doesn't look anything like Lisp? It doesn't matter; let's clean it up a bit.
Better than XML
As mentioned in the previous section, having Ant extend itself is of little use, because XML is so cumbersome. That's not a big problem for data, but for code the sheer typing burden is enough to offset the benefits. Have you ever written an Ant script? I have, and once a script reaches a certain complexity, XML becomes very tiresome. Think about it: you have to type every word twice just to write the closing tag. It's enough to drive you crazy!
To solve this problem, we need a simpler notation. Remember, XML is just one way to express hierarchical data; we don't have to use angle brackets to serialize a tree. We could use some other format entirely. One such format (which happens to be the one Lisp uses) is called the s-expression. S-expressions do the same job as XML, with the advantage of being much easier to write, which makes them better suited for entering code. I'll say more about s-expressions shortly, but first I need to clean up one piece of XML business. Consider the file-copying example again:
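Here is the copy example once more, in its usual attribute-based form (a sketch along the lines of the Ant task discussed earlier):

```xml
<copy todir="../new/dir">
    <fileset dir="src_dir"/>
</copy>
```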
Think about what the parse tree of this code looks like in memory. There is a "copy" node with a "fileset" node under it, but where are the attributes? How are they represented? If you have used XML and never known whether to use elements or attributes, you're not alone: nobody really knows. The choice is less a technical decision than a shot in the dark. Conceptually, an attribute is just an element; anything an attribute can express, an element can express as well. The reason XML has attributes at all is to make it less verbose to write. Compare the element-only version:
<copy>
    <todir>../new/dir</todir>
    <fileset>
        <dir>src_dir</dir>
    </fileset>
</copy>
The information content is exactly the same, but using attributes reduces the typing. If XML had no attributes, the typing alone would be enough to drive people crazy.
Now that we're done with attributes, let's get to s-expressions. The reason for this detour is that s-expressions have no concept of attributes: s-expressions are so concise that attributes are simply not needed. We should keep this in mind when converting XML to s-expressions. As an example, the code above translates into an s-expression that looks like this:
(copy
    (todir "../new/dir")
    (fileset (dir "src_dir")))
Look closely at this example. What's different? The angle brackets have become parentheses, and where each element used to be delimited by a pair of tags, the closing tag (the one with the slash) is gone: a single ")" marks the end of an element. That's the whole difference. The transformation between the two notations is natural and simple, and s-expressions are much easier to type. When you first look at s-expressions (that is, at Lisp), the parentheses are annoying, aren't they? Now that we understand the truth behind them, it all becomes much easier: at the very least, it's much better than XML. Writing code with s-expressions is not only practical but enjoyable, and s-expressions have all the benefits of XML we just discussed. Now let's look at a more Lisp-styled version of the task example:
(task (name "Test")
    (echo (message "Hello World!")))

(Test)
In Lisp jargon, an s-expression is called a list. If we write the example above without line breaks and with commas instead of spaces, the expression looks very much like a list of elements with other lists nested inside it:
(task, (name, "Test"), (echo, (message, "Hello World!")))
Naturally, XML could be written in this style too. Of course, the expression above is not a list of elements in the ordinary sense: it is actually a tree, just like the XML. I call it a list, but I hope you won't get confused: nested lists and trees are the same thing. Lisp literally means list processing, but it could just as well be called tree processing, which is no different from processing XML nodes.
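To make "nested lists are trees" concrete, here is a toy s-expression reader (a sketch, not from the original text; quoted strings and other Lisp niceties are ignored). It turns parenthesized text into nested Java lists, which is exactly the tree an XML parser would build:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SExpr {
    // Parse one s-expression into nested Lists of Strings.
    public static Object parse(String text) {
        List<String> tokens = new ArrayList<>(Arrays.asList(
                text.replace("(", " ( ").replace(")", " ) ").trim().split("\\s+")));
        return read(tokens);
    }

    private static Object read(List<String> tokens) {
        String token = tokens.remove(0);
        if (token.equals("(")) {
            List<Object> list = new ArrayList<>();   // a list = an interior tree node
            while (!tokens.get(0).equals(")")) {
                list.add(read(tokens));
            }
            tokens.remove(0);                        // drop the closing ")"
            return list;
        }
        return token;                                // an atom = a leaf
    }

    public static void main(String[] args) {
        System.out.println(parse("(copy (todir dst) (fileset (dir src)))"));
    }
}
```

Notice how little machinery this takes compared with an XML parser: the closing ")" is the whole price of admission.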
After all this, we are finally quite close to Lisp, and the mystery of Lisp's parentheses (so profound, many Lisp fanatics believe) is dissolving. Now let's move on to something else.
Re-examine C Macros
At this point you are probably tired of hearing about XML, and I am tired of talking about it. Let's set aside trees, s-expressions, and Ant for a moment and talk about the C preprocessor. Someone will surely ask what C has to do with our topic. You have probably heard a lot about metaprogramming and read discussions of code that writes code, and found them hard to follow, often because those articles use programming languages you aren't familiar with. The concepts themselves are relatively simple, and I'm sure they will be much easier to grasp if we discuss metaprogramming using C as the example. All right, let's keep going.
One question first: why write code with code? How is this done in real programming? What exactly does metaprogramming mean? You have probably heard answers to these questions without quite seeing the point. To reveal what's behind them, consider a simple database problem we have all faced. Scattering SQL statements all over the program code to modify data in tables is very annoying; even LINQ in C# 3.0 doesn't fully ease the pain. Writing a complete SQL query (however pretty the syntax) just to change someone's address or look up someone's name is deadly boring for a programmer. So how do we solve this problem? The answer: use a data access layer.
The concept is quite simple: abstract away the data access (at least the trivial parts), map database tables to classes, and perform queries indirectly through property accessors on those objects. This greatly simplifies development: instead of writing SQL queries, we call methods on objects (or assign to properties, depending on the language). Anyone who has used this approach knows it saves time. Of course, writing such an abstraction layer yourself takes a lot of time: you have to write a class for every table and turn property accesses into SQL queries, which takes real effort, and doing it by hand is obviously unwise. Once you have a plan and a template, there is really not much to think about; you just write similar code to the same template over and over. So many people have found a better way: connect to the database, grab the schema, and generate the code automatically from predefined or user-customized templates.
If you have used such a tool, you know how magical it feels. Often with a few mouse clicks you connect to the database, generate the data access source code, and add the files to your project: ten minutes of work that would have taken hundreds of man-hours by hand. And what if your database schema changes later? Then you just repeat the process. Some tools even automate this: make regeneration part of the project build, and the database layer is rebuilt every time the project compiles. That is really great: what you have to do shrinks essentially to zero, and since the data access code is regenerated at compile time whenever the schema changes, any use of stale code elsewhere in the program becomes a compilation error.
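The kind of generator described above can be sketched in a few lines. This toy (all names invented) takes a schema, a map from table names to column names, and emits one Java class per table as source text:

```java
import java.util.List;
import java.util.Map;

public class DataLayerGen {
    // Emit a trivial data-access class for one table: one String field per column.
    public static String classFor(String table, List<String> columns) {
        StringBuilder src = new StringBuilder("public class " + table + " {\n");
        for (String col : columns) {
            src.append("    public String ").append(col).append(";\n");
        }
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        // In a real tool this map would come from querying the database schema.
        Map<String, List<String>> schema =
                Map.of("Person", List.of("name", "address"));
        schema.forEach((table, cols) -> System.out.print(classFor(table, cols)));
    }
}
```

A real generator would also emit accessors and SQL, but the shape is the same: data (the schema) in, code out.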
The data access layer is one good example, and there are many more: GUI boilerplate, web code, COM and CORBA stubs, MFC and ATL, and so on. In all these places, a lot of similar code is repeated many times. Since this code can be written automatically, and programmer time is far more expensive than CPU time, many tools have appeared that generate the boilerplate automatically. What are these tools, essentially? They are programs that write programs. They go by a mysterious name: metaprogramming. That is the original meaning of metaprogramming.
Metaprogramming could be used in countless places, but in practice it isn't used that often. Think about it: if some code would be repeated, by copy and paste, only six or seven times, is it worth building a dedicated generation tool for that much work? Of course not. A data access layer or COM stubs are often repeated hundreds or even thousands of times, so generating them with a tool is the best approach; for code repeated only a dozen times, a special tool isn't worth it. If you build code generation tools where you don't need them, you are overestimating the benefits of code generation. On the other hand, if creating such a tool were simple enough, you would want to use one as often as possible, because it would surely save time. So let's see whether there is a reasonable way to achieve that.
This is where the C preprocessor comes in. We have all used the C/C++ preprocessor for simple compile-time directives and simple code transformations (toggling debug code, for example). Look at an example:
#define triple(X) X+X+X
What does this do? It is a simple preprocessor directive that replaces every occurrence of triple(X) in the program with X+X+X: for example, every triple(5) becomes 5+5+5 before the code reaches the compiler. This is a simple case of code generation. Now, if the C preprocessor were more powerful, if it could connect to a database, and if it had a few other simple mechanisms, we could build our data access layer right inside the program. The following example is a hypothetical extension of C macros:
#get-db-schema("127.0.0.1")
#iterate-through-tables
#for-each-table
class #table-name
{
}
#end-for-each
We connect to the database schema, iterate through the tables, and create a class for each table, all in a few lines. Each time the project is compiled, these classes are regenerated from the current schema. We have effectively built a complete data access layer inside the program, with no external tools at all. One disadvantage of this approach is that we have to learn a new compile-time language; another is that no such advanced C preprocessor exists. And when you need complex code generation, the preprocessing language itself must become quite complex, with enough libraries and language constructs: if the code we want to generate depends on files on some FTP server, the preprocessor has to support FTP access. Having to create and learn a whole new language just for such tasks is a bit disgusting (in fact a language with these capabilities already exists, which makes it more ridiculous still). Why not be more flexible and simply use C++ itself as our preprocessing language? Then we could use the full power of the language, and the only new things to learn would be a few simple directives that distinguish compile-time code from run-time code:
for (int i = 0; i < n; i++)
{
    cout
© 2024 shulou.com SLNews company. All rights reserved.