Also cool to play around with are the options that let GCC (and presumably LLVM) show you the various compilation stages of your C code. You can even spit out the C and the resulting assembly side by side.
I haven't had the time to play with this, but Part 4 of the excellent "Unix as IDE" series [1] goes into it and I'm sure there's more around the web.
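For example, a few of the flags I know of (hello.c is a stand-in; double-check against your GCC version's docs):

    $ gcc -E hello.c -o hello.i      # stop after preprocessing
    $ gcc -S hello.c -o hello.s      # stop after compilation, emit assembly
    $ gcc -save-temps hello.c        # keep every intermediate (.i, .s, .o)
    $ gcc -g -c -Wa,-adhln hello.c   # interleaved C source and assembly listing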
Another really fun way to get into the underlying assembly that the C compiler generates is Vidar Hokstad's "Writing a compiler in Ruby, bottom up" [2]. This series involves writing little C functions, compiling them to the simplest assembly you can get, then writing a Ruby compiler (in Ruby!) that emits that assembly. Some people have objections to the approach, but it's quite nifty. I especially like it because it's refreshing to see a compiler tutorial that doesn't start with the lexer and parser.
GDB is one of those super-useful skills you just can't find in most CS graduates. I often spend a few days with new C developers just teaching them how to use GDB to find problems. They are amazed when they find out you can examine variables and set conditional breakpoints.
Just telling them how to start it and use "bt", "up", "down", "print", and "cont" would be a huge step forward for most developers...
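A first session for them might look something like this (program, function, and variable names invented):

    $ gdb ./myprog
    (gdb) break process_item if item->id == 42   # conditional breakpoint
    (gdb) run
    (gdb) bt            # backtrace: how did I get here?
    (gdb) up            # move up one stack frame
    (gdb) print *item   # examine a variable
    (gdb) cont          # continue execution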
On a related note, it drives me crazy when developers don't know strace/ltrace or the equivalent for their platform. I use strace many times a day for everything from diagnosing my own code, to figuring out which config files an application actually loads, to finding out what's slowing an app down.
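For example (the app name and pid are placeholders):

    $ strace -f -e trace=open ./myapp   # which files (configs!) it actually opens
    $ strace -c ./myapp                 # per-syscall count/time summary
    $ strace -p 1234                    # attach to an already-running process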
This is an area where many college professors fall short. In my intro to CS class we learned C, and the professor explicitly told us to debug our code with printf statements. It wasn't until my first job that I even found out about gdb. I can't believe how much time I wasted trying to debug C and C++ code in college with nothing more than printf and cout.
When students came to me for help, my first two questions were always "What does gdb say?" and "What does valgrind say?" If they couldn't answer, I'd tell them to go find out.
Yeah, GDB and Valgrind are heavily encouraged at the University of Michigan in both introductory and advanced classes. Talking to friends at other universities, it seems like most other places encourage it too.
As a student attending the University of Michigan this fall, I was looking through the introductory programming course syllabus and it mentions only ddd. Is gdb used more heavily in later courses aimed at those pursuing an EECS concentration?
ddd is a graphical frontend to gdb and a few other debuggers (like jdb). ddd's UI is straight out of the 90s era of commercial Unices, but I prefer it to gdb since it visualizes the source code with breakpoints and the instruction pointer, and has some nice variable display/plotting features (e.g. it's very easy to visualize a matrix AS a matrix if you're doing graphics work).
edit: ddd also has a gdb prompt at the bottom, so you really don't lose much by using ddd over gdb unless you can't run an X server
I generally try to refrain from bringing up Emacs in threads that have nothing to do with it (since Emacs does everything) but I was pleasantly surprised by its gdb support and I have used it extensively in the past.
Maybe it's professor-specific, but EECS 151, 280 and 281 all stressed using gdb. ddd was mentioned, but most of the debugging examples used gdb. I took these classes several years ago, though, so it's also possible things have changed since then.
One nice thing about printf debugging is that your eyes and brain never leave the code. There is mental overhead in involving a third entity (in addition to the code and the command line): the debugger.
Obviously, if the compile/execute loop is cumbersome then GDB will save a ton of time. But when the loop is fast, I find printf-ing to be effective and easy on the brain since you focus 100% on the code.
I don't follow how writing `printf` and watching the output in the console is 100% code when a debugger will stop at `debug(ger);` and show you the surrounding code in context with the output (and more output is accessible in far less time).
What makes it even worse is that the lack of a REPL means you need to recompile between each change. In any large program this is going to take minutes, and the minutes add up. A simple GDB invocation is not only a better way of tracking the bug down, but can potentially save hours.
Quick changes that don't require a full recompilation generally mean intermediate files are lying around, which means you only need to recompile changed files and relink. This is usually pretty fast until you start changing headers or something like that. (I'm very used to projects with 30-minute-plus full compile times taking less than 10 seconds to recompile and relink single-file changes.)
This is not to say that gdb won't save hilarious amounts of time, or that you can be a good C coder without understanding a debugger (I don't think you can), just that the huge-compile-time thing isn't true most of the time.
Well, ccache helps a lot with that. And sometimes it becomes easier to throw an __asm__("int3"); into your code than try to massage gdb into breaking at the exact right spot in your code and only under certain conditions.
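Something like this (x86-specific, and the condition and names are invented):

    /* Trap straight into the debugger, but only when the interesting
       case actually happens. */
    if (bytes_written > buf_size)    /* hypothetical condition */
        __asm__("int3");             /* x86 breakpoint; gdb stops right here */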
I guess it depends on the college. My ECE intro to programming class taught gdb very early, basically right after we learned the basics of C. I thought it was pretty neat but it never really seemed like a huge advantage over printing, especially after I wrote a few logging macros.
gdb being a utility, I believe the student is generally expected to learn it on their own if they like. Much like version control and other things that make the life of a coder easier, but aren't actually part of "Computer Science".
Yes but I don't even remember being told it existed. My intro to C professor made sure we knew that Vi and Emacs existed and gave us enough information to be interested in learning more; why not gdb? My guess is he probably didn't use it.
Yeah, we didn't touch any debuggers; I think to an extent they were more worried about people learning the language basics. That's the downside of a first-year C/C++ course that doesn't just have comp sci students in it.
The other thing our uni was bad about is that they encouraged us to use vim through the uni servers early on, but never gave us a proper overview or the resources to really learn it. As a result, most saw it as a handicapped text editor that just made things harder.
A course learning C is not a course in debugging or learning any particular debugger. When you are just learning the language, printf is probably the best way to do it.
On the other hand, a follow-up course on actual debugging is probably warranted. Debugging might not be part of the Science bit of Computer Science but it is on the practical end of the Computer bit which you'll probably run into doing the Science bit.
Our version of the course was adapted from the CMU course: http://csapp.cs.cmu.edu/ I thought it was an excellent course. It taught students both concepts and skills that I had picked up in a much more ad-hoc manner.
GDB's not the only debugger out there. I think the real limitation is lack of debug methodology in general. I've discovered numerous features of proprietary debuggers just by thinking "I wish the tool could do X" only to find out that it already could.
If you're on a Mac (pre 10.8, which unfortunately messes things up), I recommend biting the bullet and reinstalling gdb. The process is a bit annoying[0], but tui mode is worth the effort.
If you enjoyed this, then you'll certainly enjoy the programming chapter of 'Hacking: The Art of Exploitation' by Jon Erickson (http://en.wikipedia.org/wiki/Hacking:_The_Art_of_Exploitatio...). The first half of the book is a similar exploration of C programming using GDB to explain everything. Recommended.
The visual debugger in both Eclipse CDT and Visual C++ lets you do things like create breakpoints, step through your program, monitor variable values, even create conditional breakpoints that trigger when a particular line of code has been executed n times or when some expression involving variables in the local context of the breakpoint becomes true.
My question is, what advantages do you get by using gdb directly through the CLI rather than through an IDE? (Like Eclipse or NetBeans, which themselves use gdb for C/C++ debugging but put a nice graphical UI on it.)
Flip it around: what advantage does a "nice graphical UI" give you for what is fundamentally a text processing problem. Except for the most basic stuff (starting, stopping, setting a breakpoint at a line, looking at a variable value) everything you want to do in gdb involves something approximating a query language: set a watchpoint on the 32 bit quantity stored at this address; dump memory from this pointer; call this function (e.g. strstr() on a memory region) and tell me the result.
Watching your code run and hitting a "next" button repeatedly isn't really a good use of the tool.
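In gdb syntax, those examples look something like this (the address and buffer name are invented):

    (gdb) watch *(int *)0x601040       # break when this 32-bit word changes
    (gdb) x/32xb buf                   # dump 32 bytes starting at buf
    (gdb) call strstr(buf, "needle")   # call a function, print the result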
GDB itself has a -tui option which gives you a nice "gui"-style interface in the terminal. As for why you would avoid any sort of GUI interface entirely: as far as I know, the command line alone doesn't give you any particular advantage.
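For instance (myprog is a placeholder):

    $ gdb -tui ./myprog   # start with the text UI enabled
    (gdb) layout src      # source pane on top, prompt below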
I'm not sure whether CDT and Visual C++ can do this, but I don't think so: gdb can attach to already-running processes. That saved me once with a bug in the Ruby runtime.
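For example (the pid is a placeholder):

    $ gdb -p 12345    # attach to the running process
    (gdb) bt          # see what it's doing right now
    (gdb) detach      # let it keep running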
You can use conditional breakpoints and watch expressions in gdb, too. It works how you would expect, not dissimilar to the GUI way.
One advantage is that gdb might be available in more environments.
In general, whatever you are more comfortable with is the better tool. I prefer command-line interfaces for most things, but find it easier to set breakpoints by clicking next to a line of code.
While graphical debuggers are great overall, there are times when I prefer to get down to the command line and do my debugging there. And quite surprisingly, I don't lose much efficiency there, either.
But then, this could be the story of most command-line utilities: seems fiddly at first, but is actually quite usable and oftentimes more convenient than all those whiz-bang graphical tools.
The only thing I like about graphical debuggers is the ease with which you can set up multi-window dashboards. I can have my stack window, a few watched variables, and the code all on one screen at the same time as I step through the program.
There was a debugger posted here on HN a while back that actually displayed structs and pointers graphically. What was the name of that project, and is it still around? I was trying to find it the other day. I thought it was a dynamite tool for students.
With LLVM, we should be able to have a REPL for C as a pedagogical tool.
lldb would be even better than gdb for this, because lldb actually uses clang's parsing for everything.
I was watching an Apple talk on lldb which explained this in more detail, and it shows a lot of promise for a debugger to have a full C compiler inside of it.
This is pretty neat, though I found myself doing similar things in small test files when I was first learning C and printing the results. C compile times suck, but with like a 10 line program or so to toy with a language semantic, it was practically instant (in 2007). But then again I had never used a dynamic language like Ruby/Python before then so I didn't know better.
More people should be hopping on this bandwagon though because debuggers are awesome. I typically find myself using `po` the most in LLDB (Xcode, iOS development) but it's insanely useful especially when Xcode refuses to show me the values of something I want in the Variables View, ex. NSDictionary keys/values, objects in an NSArray, etc. I'll also use it sometimes to execute simple commands like `[myArrayObject count]` when the Variables View refuses to show me property values. Sometimes Xcode's GUI bits just don't cut it!
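For anyone who hasn't tried it, it looks like this at the lldb prompt (the variable names are from hypothetical code):

    (lldb) po myDictionary                  # print the object's -description
    (lldb) p (int)[myArrayObject count]     # evaluate an Objective-C expression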
Can I ask you what you're comparing with?
I can compare with C++, Scala and Go.
The first two are horrible in that department when compared with C. Go's compiles are blazingly fast.
"Total build time after make clean is about 1min, give or take 10secs. `touch include/linux/version.h` is 6 seconds to rebuild. Just doing a rebuild without touching anything is 2.7secs.
For a defconfig, it does build a godawful amount of modules :^)
You've probably never tried compiling the Linux kernel back in the late 90s on a top-of-the-line machine (hint: gcc was slow; it would take 35 minutes or so to compile from scratch) ;)
It's great for instant feedback, but IMHO interactive programming makes it easy (or at least easier) to be lazy. If you get used to coding by trial and error, you'll never really understand the language or the problem you're trying to solve; you'll just keep trying things until it works. I know that's not how everyone approaches programming, but I've seen it far too often to dismiss it.
For the somewhat-graphically-inclined, gdb mode in Emacs has the much under-advertised 'M-x gdb-many-windows', which shows your stack, local variables, breakpoints, etc. in separate 'windows'.
Very interesting! But I'd say this is dangerous because of the misleading conclusions it can lead you to.
Relying on what gets printed when you print a pointer value is misleading, and so is concluding things like "the size of an int is 4" or "the size of a double is 8". To be clear, those aren't conclusions the author actually draws, but someone doing exploratory programming easily might, since the whole point of exploratory programming is learning by seeing how the system responds to what you do.
And maybe I'm wrong, but it seems even the author got misled by it.
"I'm going to ignore why 2147483648 == -2147483648; the point is that even arithmetic can be tricky in C, and gdb understands C arithmetic."
That's actually the result of undefined behavior, and not so much a result of "how C integer arithmetic works".
I really liked the idea. I just think it may be misleading if the tool you're using is GDB.
It'd be interesting to have a tool which allowed that sort of exploratory programming but took into consideration undefined behavior, unspecified behavior, and implementation-defined behavior.
The maximum int may be less than 2^31-1. Maybe your C implementation decides that your int type will be a 16-bit object with values ranging from -2^15 to 2^15-1. In that case, that integer literal would not be an int and is likely to be a long, and maybe, in this same implementation, a long is a 64-bit object ranging from -2^63 to 2^63-1. In that case, the assertion is just plain false. That's not the system described in the article, but it could happen on some other system.
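If you want to explore this honestly, ask the implementation instead of inferring from printed values; a minimal sketch using only standard limits.h:

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        /* All implementation-defined; the standard only guarantees minimum ranges. */
        printf("int:  %d .. %d (%zu bytes)\n", INT_MIN, INT_MAX, sizeof(int));
        printf("long: %ld .. %ld (%zu bytes)\n", LONG_MIN, LONG_MAX, sizeof(long));
        return 0;
    }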
Does anyone else think debuggers are awesome for learning how programs/languages work? It's pretty much always the first thing I do, even before reading the docs: build and debug.
I just got back into C programming after a long absence. Programming with Eclipse CDT is the way to go, at least for starting out. The debugger is great.
Can I do this with a hand-written assembly program, i.e. not necessarily one that has been compiled with as and subject to GNU default optimisations or "constraints"?
The commands you want are "stepi" (single-step one instruction), "disass" (disassemble at the current point in the program), and "info registers" (show you what's in all of the registers). These work equally well for hand-written assembly and for any arbitrary compiled program.
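A minimal session might look like this, assuming your assembly defines a _start symbol:

    $ gdb ./myasm
    (gdb) break _start      # stop at the entry point
    (gdb) run
    (gdb) stepi             # execute exactly one instruction
    (gdb) info registers    # show register state
    (gdb) disass            # disassemble around the current instruction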
It's been many years since I've done hand-written assembly, so I can't say for sure, but it should be able to do instruction stepping on arbitrary programs.
You can use "layout asm" to see the dissassembly as you step through your program. "layout reg" will then split the view and show you the registers at the same time.
Yeah, agree with this; the Emacs integration is very nice. As a side note, if you're particularly aggressive with gdb, or the project you're working on is very large, it helps to increase the buffer size significantly.
I'm sort of in between on this. I really don't like using the graphical debugger in Xcode, but it is great when you have to dig around on multiple threads. I used ddd a bit, but always ended up just reverting to command-line gdb.
Typically, though, the performance of the graphical debugger is pretty lousy compared to the command line, and (in the case of Xcode) it doesn't do everything. Knowing how to navigate on the command line is very beneficial, especially when you need to [for lack of better words] rip the shit out of something, inject chunks of memory, or forcibly reproduce bugs that don't happen often.
That being said, I have heard that visual studio is amazing. For some reason I've never developed on Windows platforms so I've never had the opportunity to use the debugger.
I'd be curious to hear other people's opinions/experiences. I'll freely admit that I use the tools I use because I've grown comfortable with them.
gdb is one of those tools that I regret never learning because I've always had an IDE. I was able to follow the examples here well enough, but the array assignment causes gdb to crash in Cygwin. Does anyone have any suggestions for how to overcome that (short of installing a vm & a real linux)?
That's because sizeof is for the size of the variable (a char pointer), not the size of the string up to the null terminator, which is what strlen's for.
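A quick illustration of the difference (plus the extra wrinkle that arrays behave differently from pointers under sizeof):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char *p = "hello";
        char a[] = "hello";
        printf("%zu\n", sizeof(p));   /* size of the pointer, e.g. 8 on 64-bit */
        printf("%zu\n", sizeof(a));   /* 6: five chars plus the '\0' */
        printf("%zu\n", strlen(p));   /* 5: characters before the terminator */
        return 0;
    }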
[1] http://blog.sanctum.geek.nz/unix-as-ide-compiling/
[2] http://www.hokstad.com/compiler