C++ says “We have try…finally at home” (devblogs.microsoft.com/oldnewthing)
106 points by ibobev 16 hours ago | 121 comments




The submitted title is missing the salient keyword "finally" that motivates the blog post. The actual subtitle Raymond Chen wrote is: "C++ says “We have try…finally at home.”"

It's a snowclone based on the meme, "Mom, can we get <X>? No, we have <X> at home." : https://www.google.com/search?q=%22we+have+x+at+home%22+meme

In other words, Raymond is saying... "We already have the Java feature 'finally' at home in the C++ refrigerator, and it's called 'destructor'."

To continue the meme analogy, the kid's idea of <X> doesn't match mom's idea of <X>, and the kid disagrees that they're equivalent. E.g. "Mom, can we order pizza? No, we have leftover casserole in the fridge."

So some kids would complain that the RAII philosophy of C++ destructors requires creating a whole "class X{public:~X()}", which is sometimes inconvenient, so it doesn't exactly equal "finally".


HN has some heuristics to reduce hyperbole in submissions which occasionally backfire amusingly.

Yeah it's a huge mistake IMO. I see it fucking up titles so frequently, and it flies in the face of the "do not editorialise titles" rule:

    [...] please use the original title, unless it is misleading or linkbait; don't editorialize.
It is much worse, I think, to regularly and drastically change the meaning of a title automatically, until a moderator happens to notice and changes it back, than to allow the occasional somewhat exaggerated original title.

As it stands, the HN title suggests that Raymond thinks the C++ 'try' keyword is a poor imitation of some other language's 'try'. In reality, the post is about a way to mimic Java's 'finally' in C++, which the original title clearly (if humorously) encapsulates. Raymond's words have been misrepresented here for over 4 hours at this point. I do not understand how this is an acceptable trade-off.


Submissions with titles that undergo this treatment should get a separate screen where both titles are proposed, and the ultimate choice belongs to the submitter.

That would be an excellent solution I think.

Personally, I would rather we have a lower bar for killing submissions quickly with maybe five or ten flags and less automated editorializing of titles.

While I disagree with you that it's "a huge mistake" (I think it works fine in 95% of cases), it strikes me that this sort of semantic textual substitution is a perfect task for an LLM. Why not just ask a cheap LLM to de-sensationalize any post which hits more than 50 points or so?

We saw that a few days ago, someone did that.

You can always contact [email protected] to point out errors of this nature and have it corrected by one of the mods.

A better approach would be to not so aggressively modify headlines.

Relying on somebody to detect the error, email the mods (significant friction), and then hope the mods act (after discussion has already been skewed) is not really a great solution.


It has been up with the incorrect title for over 7 hours now. That's most of the Hacker News front-page lifecycle. The system for correcting bad automatic editorialisation clearly isn't working well enough.

Oh, come on man! These are trivial bugs. Whoever noticed it first should have sent the email to the mods. I did it before I posted my previous comment, and I now see that the title has been changed appropriately.

It's not a trivial bug; it creates the same sort of aversive reaction that obvious AI slop banner images do.

7. hours.

Presumably nobody informed the mods (before I did), and it was very early in the morning in the US (assuming the mods are based in the US). That would explain the delay.

Anyway, going forward, if anything like this happens again, folks should simply shoot an email to the mods immediately, and if the topic is really interesting and deserving of more discussion, they can always ask the mods to keep the post on the front page longer via the second-chance pool, etc.

It just takes a minute or two of one's time and hence not worth getting het up over.


It would be easier for everyone involved, and not depend on mods being awake, if HN didn't just automatically drastically change the meaning of headlines.

Again, this post was misrepresenting Raymond's words for over 7 hours. That's most of its time on the front page. The current system doesn't work.


It's rare to see the mangling heuristics improve a title these days. There was a specific type of clickbait title that was overused at the time, so a rule was created. And now that the original problem has passed, we're stuck with it.

I intentionally shortened the title because there is a length limit. Perhaps I didn't do it the right way because I was unfamiliar with the mentioned meme. Sorry about that.

It's important even without the meme. C++ has try-catch but not try-finally.

It is common for some titles to exceed the allowed length limit on HN. I often do not have enough time to contemplate the best way to shorten them.

You have a few minutes to change the title after the submission, I do it all the time.

That's why you shouldn't use memes in the titles of technical articles. The intelligibility of your intent is vastly reduced.

The title of the blog post is perfectly intelligible. It becomes unintelligible when you remove random words from it.

It didn’t make sense to me, either, and I’m a native English speaker. The cultural reference was lost on me.

The requirement to avoid any cultural reference is a bit strict

It still didn't make sense to me with all the words. The way it finally made sense was seeing the formatting: "We have `try...finally` at home".

I'm curious about the actual origin now, given that a quick search shows only vague references or claims that it is recent, but this meme is present in Eddie Murphy's "Raw" from 1987, so it is at least that old.

Sounds like a perfect fit for some Deep Research.

Edit: A deep research run by Gemini 3.0 Pro says the origin is likely to be stand-up comedy routines between 1983 and 1987, and particularly mentions Eddie Murphy: the socioeconomic precursor "You ain't got no McDonald's money" in Delirious (1983), culminating in the meme in Raw (1987). So Eddie might very well be the origin.


> So some kids would complain that the RAII philosophy of C++ destructors requires creating a whole "class X{public:~X()}", which is sometimes inconvenient, so it doesn't exactly equal "finally".

Those figurative kids would be stuck in a mental model where they try to shoehorn their ${LanguageA} idioms onto applications written in ${LanguageB}. As the article says, C++ has had destructors since the "C with Classes" days. Complaining that you might need to write a class is specious reasoning because, if you have a resource worth managing, you already use RAII to manage it. And RAII is one of the most fundamental and defining features of C++.

It all boils down to whether one knows what they are doing, or even bothers to know what they are doing.
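To make this concrete, the standard library itself is built on the pattern; a lock, for instance, needs no finally block at all:

    #include <mutex>

    std::mutex m;

    void update() {
        std::lock_guard<std::mutex> lock(m);  // acquired here
        // ... work that may return early or throw ...
    }   // released here, on every exit path, with no finally in sight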


Ok, but sometimes you just need a single line in a finally and writing a class is more annoying

> Ok, but sometimes you just need a single line in a finally and writing a class is more annoying

I don't think you understand.

If you need to run cleanup code whenever you need to destroy a resource, there is already a special member function designed to handle that: the destructor. Read up on RAII.

If somehow you failed to understand RAII and basic resource management, you can still use one-liners. Read up on scope guards.

If you are too lazy to learn about RAII and too lazy to implement a basic scope guard, you can use one of the many scope guard implementations around. Even Boost has those.

https://www.boost.org/doc/libs/latest/libs/scope/doc/html/sc...

So, unless you are lazy and want to keep mindlessly writing Java in ${LANGUAGE} regardless of whether it makes sense or not, there is absolutely no reason at all to use finally in C++.
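And for reference, the whole technique fits in a dozen lines. A minimal sketch (the Boost library linked above is the production-grade version):

    #include <utility>

    // Minimal scope guard: runs a callback when the scope is left,
    // whether by normal exit, early return, or exception.
    template <class F>
    class ScopeGuard {
    public:
        explicit ScopeGuard(F f) : f_(std::move(f)) {}
        ~ScopeGuard() { f_(); }
        ScopeGuard(const ScopeGuard&) = delete;
        ScopeGuard& operator=(const ScopeGuard&) = delete;
    private:
        F f_;
    };

Usage is the one-liner the grandparent asked for, e.g. `ScopeGuard g{[&] { std::fclose(fp); }};`, with no bespoke class per resource.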


Slightly more than that: if you need to run cleanup code, whatever needs cleaning up should be a class, and the cleanup should happen in its destructor.

Take a file handle, for instance. Don't use open() or fopen() and then try to close it in a finally. Instead, use a file class and let it close itself by going out of scope.
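A minimal sketch of such a file class (std::fstream already does this for you, so this is purely illustrative):

    #include <cstdio>

    // Wrap the raw handle once; every user gets the close for free.
    class File {
    public:
        File(const char* path, const char* mode)
            : f_(std::fopen(path, mode)) {}
        ~File() { if (f_) std::fclose(f_); }  // closes on scope exit
        File(const File&) = delete;
        File& operator=(const File&) = delete;
        std::FILE* get() const { return f_; }
    private:
        std::FILE* f_;
    };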


Destructors are vastly superior to the finally keyword because they only require us to remember a single place to release resources (in the destructor) as opposed to every finally clause. For example, a file always closes itself when it goes out of scope instead of having to be explicitly closed by the person who opened the file. Syntax is also less cluttered with less indentation, especially when multiple objects are created that require nested try... finally blocks. Not to mention how branching and conditional initialization complicate things. You can often pair up constructors with destructors in the code so that it becomes very obvious when resource acquisition and release do not match up.

I couldn't agree more. And in the rare cases where destructors do need to be created inline, it's not hard to combine destructors with closures into library types.

To point at one example: we recently added `std::mem::DropGuard` [1] to Rust nightly. This makes it easy to quickly create (and dismiss) destructors inline, without the need for any extra keywords or language support.

[1]: https://doc.rust-lang.org/nightly/std/mem/struct.DropGuard.h...


A writable file closing itself when it goes out of scope is usually not great, since errors can occur when closing the file, especially when using networked file systems.

https://github.com/isocpp/CppCoreGuidelines/issues/2203


You need to close it and check for errors as part of the happy path. But it's great that in the error path (be that using an early return or throwing an exception), you can just forget about the file and you will never leak a file descriptor.

You may need to unlink the file in the error path, but that's best handled in the destructor of a class which encapsulates the whole "write to a temp file, rename into place, unlink on error" flow.
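A sketch of that division of labour (the file name and error type are made up):

    #include <fstream>
    #include <stdexcept>
    #include <string>

    void save(const std::string& data) {
        std::ofstream out("data.txt");  // hypothetical file
        out << data;
        out.close();                    // happy path: close explicitly...
        if (out.fail())                 // ...and check that it worked
            throw std::runtime_error("failed to write data.txt");
    }   // on an early return or an exception from elsewhere, the
        // destructor still closes the handle, so nothing leaks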


Any fallible cleanup function is awkward, regardless of error handling mechanism.

Java solved it by having exceptions be able to attach secondary exceptions, in particular those occurring during stack unwinding (via try-with-resources).

The result is an exception tree that reflects the failures that occurred in the call tree following the first exception.


The entire point of the article is that you cannot throw from a destructor. Now how do you signal that closing/writing the file in the destructor failed?

You are allowed to throw from a destructor as long as there's not already an active exception unwinding the stack. In my experience this is a total non-issue for any real-world scenario. Propagating errors from the happy path matters more than situations where you're already dealing with a live exception.

For example: you can't write to a file because of an I/O error, and when throwing that exception you find that you can't close the file either. What are you going to do about that other than possibly log the issue in the destructor? Wait and try again until it can be closed?

If you really must force Java semantics into it with chains of exception causes (as if anybody handled those gracefully, ever) then you can. Get the current exception and store a reference to the new one inside the first one. But I would much rather use exceptions as little as possible.
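The "no active exception" rule can even be written down directly with C++17's std::uncaught_exceptions(). A sketch, with a hypothetical commit step:

    #include <exception>
    #include <stdexcept>

    class Committer {
    public:
        ~Committer() noexcept(false) {
            // Same count as at construction means a normal scope exit,
            // so it is safe to report failure by throwing.
            if (std::uncaught_exceptions() == count_) {
                if (!commit()) throw std::runtime_error("commit failed");
            }
            // Otherwise we are unwinding: log at most, never throw.
        }
    private:
        int count_ = std::uncaught_exceptions();
        bool commit() { return true; }  // hypothetical commit step
    };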


Just panic. What's the caller realistically going to do with that information?

That tastes like leftover casserole instead of pizza.

> The entire point of the article is that you cannot throw from a destructor.

You need to read the article again because your assertion is patently false. You can throw and handle exceptions in destructors. What you cannot do is let those exceptions escape the destructor, because per the standard that will cause the application to be terminated immediately.


So inside a destructor throw has a radically different behaviour that makes it useless for communicating non-fatal errors

> So inside a destructor throw has a radically different behaviour that makes it useless for communicating non-fatal errors

It's weird how you tried to frame a core design feature of the most successful programming language in the history of mankind as "useless".

Perhaps the explanation lies in how you tried to claim that exceptions had any place in "communicating non-fatal errors", not to mention that your scenario, handling non-fatal errors when destroying a resource, is fundamentally meaningless.

Perhaps you should take a step back and think whether it makes sense to extrapolate your mental models to languages you're not familiar with.


Destructors and finally clauses serve different purposes IMO. Most of the languages that have finally clauses also have destructors.

> Syntax is also less cluttered with less indentation, especially when multiple objects are created that require nested try... finally blocks.

I think that's more of a point against try...catch/maybe exceptions as a whole, rather than the finally block. (Though I do agree with that. I dislike that aspect of exceptions, and generally prefer something closer to std::expected or Rust Result.)


> Most of the languages that have finally clauses also have destructors.

Hm, is that true? I know of finally from Java, JavaScript, C# and Python, and none of them have proper destructors. I mean some of them have object finalizers which can be used to clean up resources whenever the garbage collector comes around to collect the object, but those are not remotely similar to destructors which typically run deterministically at the end of a scope. Python's 'with' syntax comes to mind, but that's very different from C++ and Rust style destructors since you have to explicitly ask the language to clean up resources with special syntax.

Which languages am I missing which have both try..finally and destructors?


In C# the closest analogue to a C++ destructor would probably be a `using` block. You’d have to remember to write `using` in front of it, but there are static analysers for this. It gets translated to a `try`–`finally` block under the hood, which calls `Dispose` in `finally`.

    using (var foo = new Foo())
    {
    }
    // foo.Dispose() gets called here, even if there is an exception
Or, to avoid nesting:

    using var foo = new Foo(); // same but scoped to closest current scope
There is also `await using` in case the cleanup is async (`await foo.DisposeAsync()`).

I think Java has something similar called try with resources.


Java's is

    try (var foo = new Foo()) {
    }
    // foo.close() is called here.
I like the Java method for things like files because, if there's an exception during the close of a file, the regular `IOException` block handles that error the same as it handles a read or write error.

What do you do if you wanna return the file (or an object containing the file) in the happy path but close it in the error path?

You'd write it like this

    void bar() {
      try (var f = foo()) {
        doMoreHappyPath(f);
      }
      catch(IOException ex) {
        handleErrors();
      }
    }

    File foo() throws IOException {
      File f = openFile();
      doHappyPath(f);
      if (badThing) {
        throw new IOException("Bad thing");
      }
      return f;
    }
That said, I think this is a bad practice (IMO). Generally speaking I think the opening and closing of a resource should happen at the same scope.

Making it non-local is a recipe for an accident.

*EDIT* I've made a mistake while writing this, but I'll leave it up there because it demonstrates my point. The file is left open if a bad thing happens.


In Java, I agree with you that the opening and closing of a resource should happen at the same scope. This is a reasonable rule in Java, and not following it in Java is a recipe for errors because Java isn't RAII.

In C++ and Rust, that rule doesn't make sense. You can't make the mistake of forgetting to close the file.

That's why I say that Java, Python and C#'s context managers aren't remotely the same. They're useful tools for resource management in their respective languages, just like defer is a useful tool for resource management in Go. They aren't "basically RAII".


> You can't make the mistake of forgetting to close the file.

But you can make a few mistakes that can be hard to see. For example, if you put a mutex in an object you can accidentally hold it open for longer than you expect since you've now bound the life of the mutex to the life of the object you attached it to. Or you can hold a connection to a DB or a file open for longer than you expected by merely leaking out the file handle and not promptly closing it when you are finished with it.

Trying to keep resource open and close in the same scope is an ownership thing. Even for C++ or Rust, I'd consider it not great to leak out RAII resources from out of the scope that acquired them. When you spread that sort of ownership throughout the code it becomes hard to conceptualize what the state of a program would be at any given location.

The exception is memory.


That approach doesn't allow you to move the file into some long lived object or return it in the happy path though, does it?

You can move the burden of disposing to the caller (return the disposable object and let the caller put it in a using statement).

In addition, if the caller itself is a long-lived object it can remember the object and implement dispose itself by delegating. Then the user of the long-lived object can manage it.


> You can move the burden of disposing to the caller (return the disposable object and let the caller put it in a using statement).

That doesn't help. Not if the function that wants to return the disposable object in the happy path also wants to destroy the disposable object in the error path.


You have to write a disposable wrapper to return. Return it in error case too.

    readonly record struct Result<TResult, TDisposable>(TResult? IfHappy, TDisposable? Disposable): IDisposable where TDisposable : IDisposable
    {
        public void Dispose() => Disposable?.Dispose();
    }

Usage at call site:

    using (var result = foo.GetSomethingIfLucky())
    {
        if (result.IfHappy is {} success)
        {
            // do something
        }
    }

As someone coming from RAII to C#, you get used to it, I'd say. You "just" have to think differently. Lean into records and immutable objects whenever you can and IDisposable interface ("using") when you can't. It's not perfect but neither is RAII. I'm on a learning path but I'd say I'm more productive in C# than I ever was in C++.

I agree with this. I don't dislike non-RAII languages (even though I do prefer RAII). I was mostly asking a rhetorical question to point out that it really isn't the same at all. As you say, it's not a RAII language, and you have to think differently than when using a RAII language with proper destructors.

Pondering - is there a language similar to C++ (whatever that means, it's huge, but I guess a sprinkle of don't pay for what you don't use and being compiled) which has no raw pointers and such (sacrificing C compatibility) but which is otherwise pretty similar to C++?

Technically CPython has deterministic destructors, __del__ always gets called immediately when ref count goes to zero, but it's just an implementation detail, not a language spec thing.

I don't view finalizers and destructors as different concepts. The notion only matters if you actually need cleanup behavior to be deterministic rather than just eventual, or you are dealing with something like thread locals. (Historically, C# even simply called them destructors.)

There's a huge difference in programming model. You can rely on C++ or Rust destructors to free GPU memory, close sockets, free memory owned through an opaque pointer obtained through FFI, implement reference counting, etc.

I've had the displeasure of fixing a Go code base where finalizers were actively used to free opaque C memory and GPU memory. The Go garbage collector obviously didn't consider it high priority to free these 8-byte objects which just wrap a pointer, because it didn't know that the objects were keeping tens of megabytes of C or GPU memory alive. I had to touch so much code to explicitly call Destroy methods in defer blocks to avoid running out of memory.


For GCed languages, I think finalizers are a mistake. They only serve to make it harder to reason about the code while masking problems. They also have negative impacts on GC performance.

Java is actively removing its finalizers.


Sometimes "eventually" is "at the end of the process". For many resources this is not acceptable.

> I don't view finalizers and destructors as different concepts.

They are fundamentally different concepts.

See Destructors, Finalizers, and Synchronization by Hans Boehm - https://dl.acm.org/doi/10.1145/604131.604153


Suffice it to say I don't always agree with even some of the best in the field, and they don't always agree with each other, either. Anders Hejlsberg isn't exactly a random n00b when it comes to programming language design, and he still called the C# equivalent a "destructor", though it is now known as a finalizer in line with other programming languages. They are things that clean up resources at the end of the life of an object; the difference between GC'd languages and RAII languages is that in a GC'd runtime the lifespan of an object is non-deterministic. That may very well change the programming model, as it does in many other ways, but it doesn't make the two concepts "fundamentally different" by any means. They're certainly related concepts...

But they're addressing different problems

Sure destructors are great but you still want a "finally" for stuff you can't do in a destructor


Python has that too, it's called a context manager, basically the same thing as C++ RAII.

You can argue that RAII is more elegant, because it doesn't add one mandatory indentation level.


How do you return a file in the happy path when using a context manager?

If you can't, it's not remotely "basically the same as C++ RAII".


It's not the same thing at all because you have to remember to use the context manager, while in C++ the user doesn't need to write any extra code to use the destructor, it just happens automatically.

I always wonder whether C++ syntax ever becomes readable when you sink more time into it, and if so - how much brain rewiring we would observe on a functional MRI.

It does... until you switch employers. Or sometimes even just read a coworker's code. Or even your own older code. Actually no, I don't think anyone achieved full readability enlightenment. People like me just hallucinated it after doing the same things for too long.

Sadly, that is exactly my experience.

And yet, somehow Lisp continues to be everyone's sweetheart, even though creating literal new DSLs for every project is one of the features of the language.

Lisp doesn't have much syntax to speak of. All of the DSLs use the same basic structure and are easy to read.

C++ has A LOT of syntax: init rules, consts, references, move, copy, templates, special cases, etc. It also includes most of C, which is small but has so many basic language design mistakes that "C puzzles" is a book.


The syntax and the concepts (const, move, copy, etc.) are orthogonal. You could possibly write a Lisp / s-exp syntax for C++, and all it would improve would be the macros in the preprocessor. Conversely, a DSL can still be hard to read if it uses unfamiliar/uncommon project-specific concepts.

Yes, sure.

What I mean is that in C++ all the numerous language features are exposed through little syntax/grammar details. Whereas in Lisps, syntax and grammar are primitive, and this is why macros work so well.


I continue to believe Lisp is perfect, despite only using it in a CS class a decade ago. Come to think of it, it might just be that Lisp is a perfect DSL for (among other things) CS classes…

It's because DSLs there reduce cognitive load for the reader rather than add to it.

Well-designed abstractions do that in every language. And badly designed ones do the opposite, again in all languages. There's nothing special about Lisp here

Sure, but it's you who singled out Lisp here. The whole point of a DSL is designing a purpose-built formalism that makes a particular problem easy to reason about. That's hardly a parallel to the ever-growing vocabulary of standard C++.

In my opinion, C++ syntax is pretty readable. Of course there are codebases that are difficult to read (heavily abstracted, templated codebases especially), but it's not really that different from most other languages; even C can be just as bad with heavy use of macros.

By far the worst in this aspect has been Scala, where every codebase seems to use a completely different dialect of the language, completely different constructs, etc. There seems to be very little agreement on how the language should be used. Much, much less than C++.


Scala is a meta language. It's really a language construction toolkit in a box.

It does get easy to read, but then you unlock a deeper level of misery which is trying to work out the semantics. Stuff like implicit type conversions, remembering the rule of 3 or 5 to avoid your std::moves secretly becoming a copy, unwittingly breaking code because you added a template specialization that matches more than you realized, and a million others.

"using namespace std;" goes a long way to make C++ more readable and I don't really care about the potential issues. But yeah, due to a lack of a nice module system, this will quickly cause problems with headers that unload everything into the global namespace, like the windows API.

I wish we had something like Javascript's "import {vector, string, unordered_map} from std;". One separate using statement per item is a bit cumbersome.
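Not quite that, but C++17 at least lets you pack several using-declarations into one statement:

    #include <string>
    #include <unordered_map>
    #include <vector>

    // One statement instead of three; still no real module import, though.
    using std::vector, std::string, std::unordered_map;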


Standard library modules: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2022/p24...

I have thoroughly forgotten which header std::ranges::iota comes from. I don't care either.


Last time I tried, modules were a sporadically supported mess. I'll give them another try once they have ironclad support in CMake, GCC, Clang, and Visual Studio.

It's very readable, especially compared to Rust.

I love how the haters of Rust's syntax can be roughly divided into two groups:

(1) Why doesn't it look like C++?

(2) Why does it look so much like C++?


This is just a low-effort comment.

> whether C++ syntax ever becomes readable when you sink more time into it,

Yes, and the easy approach is to learn as you need/go.


I like how Swift solved this: there's a more universal `defer { ... }` block that's executed at the end of a given scope no matter what, and after the `return` statement is evaluated if it's a function scope. As such it has multiple uses, not just for `try ... finally`.

I think Swift’s defer (https://docs.swift.org/swift-book/documentation/the-swift-pr...) was inspired by/copied from go (https://go.dev/tour/flowcontrol/12), but they may have taken it from an even earlier language that I’m not aware of.

Defer has two advantages over try…finally: firstly, it doesn’t introduce a nesting level.

Secondly, if you write

       foo
       defer revert_foo
, when scanning the code, it’s easier to verify that you didn’t forget the revert_foo part than when there are many lines between foo and the finally block that calls revert_foo.

A disadvantage is that defer breaks the “statements are logically executed in source code order” convention. I think that’s more than worth it, though.


The oldest defer-like feature I can find reference to is the ON_BLOCK_EXIT macro from this article in the December 2000 issue of the C/C++ Users Journal:

https://jacobfilipp.com/DrDobbs/articles/CUJ/2000/cexp1812/a...

A similar macro later (2006) made its way into Boost as BOOST_SCOPE_EXIT:

https://www.boost.org/doc/libs/latest/libs/scope_exit/doc/ht...

I can't say for sure whether Go's creators took inspiration from these, but it wouldn't be surprising if they did.


I'll disagree here. I'd much rather have a Python-style context manager, even if it introduces a level of indentation, rather than have the sort of munged-up control flow that `defer` introduces.

I can see your point, but that (https://book.pythontips.com/en/latest/context_managers.html) requires the object you’re using to implement __enter__ and __exit__ (or, in C#, implement IDisposable (https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...), in Java, implement AutoCloseable (https://docs.oracle.com/javase/tutorial/essential/exceptions...); there likely are other languages providing something similar).

Defer is more flexible/requires less boilerplate to add callsite specific handling. For an example, see https://news.ycombinator.com/item?id=46410610


Yeah, it's especially handy in UI code where you can have asynchronous operations but want to have a clear start/end indication in the UI:

    busy = true
    Task {
        defer { busy = false }
        // do async stuff, possibly throwing exceptions and whatnot
    }

I was contemplating what it would look like to provide this with a macro in Rust, and of course someone has already done it. It's syntactic sugar for the destructor/RAII approach.

https://docs.rs/defer-rs/latest/defer_rs/


I don't know Rust but, can this `defer` evaluate after the `return` statement is evaluated like in Swift? Because in Swift you can do this:

    func atomic_get_and_inc() -> Int {
        sem.wait()
        defer {
            value += 1
            sem.signal()
        }
        return value
    }

It's easy to demonstrate that destructors run after evaluating `return` in Rust:

    struct PrintOnDrop;
    
    impl Drop for PrintOnDrop {
        fn drop(&mut self) {
            println!("dropped");
        }
    }
    
    fn main() {
        let p = PrintOnDrop;
        return println!("returning");
    }
But the idea of altering the return value of a function from within a `defer` block after a `return` is evaluated is zany. Please never do that, in any language.

EDIT: I don’t think you can actually put a return in a defer, I may have misremembered, it’s been several years. Disregard this comment chain.

It gets even better in Swift, because you can put the return statement in the defer, creating a sort of named return value:

    func getInt() -> Int {
        let i: Int // declared but not
                   // defined yet!

        defer { return i }

        // all code paths must define i
        // exactly once, or it’s a compiler
        // error
        if foo() {
            i = 0
        } else {
            i = 1
        }

        doOtherStuff()
    }

This control flow is wacky. Please never do this.

Huh, I didn't know about `return` in `defer`, but is it really useful?

No, I actually misremembered… you can’t return in a defer.

The magical thing I was misremembering is that you can reference a not-yet-defined value in a defer, so long as all code paths define it once:

    func callFoo() -> FooResult {
    let fooParam: Int // declared, not defined yet
    defer {
      // fooParam must get defined by the end of the function
      foo(fooParam)
      otherStuffAfterFoo() // …
    }

    // all code paths must assign fooParam
    if cond {
      fooParam = 0
    } else {
      fooParam = 1
      return // early return!
    }

    doOtherStuff()
  }
Blame it on it being years since I’ve coded in swift, my memory is fuzzy.

    #include <iostream>
    #define RemParens_(VA) RemParens__(VA)
    #define RemParens__(VA) RemParens___ VA
    #define RemParens___(...) __VA_ARGS__
    #define DoConcat_(A,B) DoConcat__(A,B)
    #define DoConcat__(A,B) A##B
    #define defer(BODY) struct DoConcat_(Defer,__LINE__) { ~DoConcat_(Defer,__LINE__)() { RemParens_(BODY) } } DoConcat_(_deferrer,__LINE__)

    int main() {
        {
            defer(( std::cout << "Hello World" << std::endl; ));
            std::cout << "This goes first" << std::endl;
        }
    }

Why would that be preferable to just using an RAII-style scope_exit with a lambda?

Meh, I was going to use the preprocessor for __LINE__ anyway (to avoid requiring a variable name), so I just made it an "old school lambda." Besides, scope_exit never actually made it into the standard: it's still std::experimental::scope_exit from the Library Fundamentals TS, which is opt-in in most cases.

And here I thought we were trying to finally kill off pre-processor macros.

"We have syntax macros at home"

Calling arbitrary callbacks from a destructor is a bad idea. Sooner or later someone will violate the requirement about exceptions, and your program will be terminated immediately. So I'd only use this pattern in -fno-exceptions projects.

In a similar vein, care must be taken when calling arbitrary callbacks while iterating a data structure - because the callback may well change the data structure being iterated (classic example is a one-shot event handler that unsubscribes when called), which will break naïvely written code.
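For that second pitfall, a common defensive sketch (the registry here is made up) is to detach the list before invoking anything:

    #include <functional>
    #include <vector>

    std::vector<std::function<void()>> handlers;  // hypothetical registry

    void fire_all() {
        // Swap the list out first: callbacks may subscribe or
        // unsubscribe without invalidating the iteration below.
        std::vector<std::function<void()>> pending;
        pending.swap(handlers);
        for (auto& h : pending) h();
    }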


This is a good “how C++ does it” explanation, but I think it’s more accurate to say destructors implement finally-style cleanup in C++, not that they are finally. finally is about operation-scoped cleanup; destructors are about ownership. C++ just happens to use the same tool for both.

> In Java, Python, JavaScript, and C# an exception thrown from a finally block overwrites the original exception, and the original exception is lost.

Pet peeve of mine: all these languages got it wrong. (And C++ got it extra-wrong.)

The error you want to log or report to the user is almost certainly the original exception, not the one from the finally block. The error from the finally block is probably a side effect of the original exception. Reporting the finally exception obscures information about the root cause, making it harder to debug the problem.

Many of these languages do attach the original exception to the new exception in some way, so you can get at it if you need to, but whatever actually catches and logs the exception later has to go out of its way to make sure to log the root cause rather than some stupid side effect. The hierarchy should be reversed: the exception thrown by `finally` should be added as an attachment to the original exception, perhaps placed in a list of "secondary" errors. Or you could even just throw it away, honestly the original exception is almost always all you care about anyway.

(C++ of course did much worse by just crashing in this scenario. I imagine this to be the outcome of some debate in the committee where they couldn't decide which exception should take priority. And now everyone has internalized this terrible decision by saying "well, destructors shouldn't throw" without seeming to understand that this is equivalent to saying "destructors shouldn't have bugs". WELL OF COURSE THEY SHOULDN'T BUT GOOD LUCK WITH THAT.)


This part is not correct. I can't speak for the other languages, but in Python the exception that is originally thrown is the one that creates the traceback. If the finally block also throws an exception, then the traceback includes that as additional information. The author includes an addendum, yet he is still wrong about which exception is first raised.

> Update: Adam Rosenfield points out that Python 3.2 now saves...

how old is this post that 3.2 is "now"?


I think the author didn't check the age of Python 3.2 when adding the update

In other words: Footgun #17421 Exhibit A.

What the blog doesn't mention is how try...finally can mess up your control flow.

In Java the following is perfectly valid:

    try {
        throw new IllegalStateException("Critical error");
    } finally {
        return "Move along, nothing to see here";
    }


Yes, Java has footguns too.

The existence of two different patterns, each with its own pitfalls, is why we can't have nice things. A finally block shouldn't return a value; it should simply be a void expression. Exception-driven APIs need to be snuffed out.

If your method throws, mark it as such and force me to handle the exception if it does; do not return a non-value value in a finally.

Using Java as the example shows just how far we have come with this thinking, why old school Java style exception handling sucks and why C++ by proxy does too.

It’s difficult to break old mental habits but it’s easier when the compiler yells at you for doing bad things.


I'm quite sure that an exception thrown in a finally block in Java will have the original attached as suppressed, not discarded.

Who needs finally when we have goto?


