• 3 Posts
  • 145 Comments
Joined 1 year ago
Cake day: July 6th, 2023


  • Maybe a good idea for a post. But the amount of reaching required makes this icky.

    • Pretending people write:
      let Ok(x) = read_input() else { return Err(Error) };
      
      instead of
       let x = read_input().map_err(|_| ...)?;
      
    • Pretending people write:
       const x: &str = "...";
      
      instead of
       const X: &str = "...";
      
    • Pretending there exist people who have such knowledge of Rust macro hygiene, identifier namespaces, etc., but somehow don’t know how macro code expands (the “shock” about the compile error).

    Maybe there is a reason after all why almost no one (maybe no one, period) was ever in that situation.





  • I will let you in on a little secret.

    The best “support” you can get is support from upstreams directly (I’m involved in both sides of that equation). But upstreams will often only “support” you when 1. you run the latest stable version, and 2. the upstream source code wasn’t patched willy-nilly by the packager (your distro).

    So the best desktop Linux experience comes from using a rolling distro that gives you such packages, with Arch being the most prominent example.

    The conventional wisdom that argues for “stability” and tells you otherwise is a meme.


  • a better solution would be to add a method called something like ulock that does a combined lock and unwrap.

    That’s exactly what’s done above using an extension trait! You can mutex_val.ulock() with it! (A minimal sketch is included at the end of this comment.)

    Now that I think about it, I don’t like how unwrap can signal either “I know this can’t fail”, “the possible error states are too rare to care about”, or “I can’t be bothered with real error handling right now”.

    That’s why you’re told (Clippy does that, I think) to use expect instead, so you can signal “whatever string” you want to signal precisely.
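
    For reference, a minimal sketch of such an extension trait (the trait and method names here are just illustrative, assuming the usual panic-on-poison behaviour of .lock().unwrap()):

    use std::sync::{Mutex, MutexGuard};

    trait Ulock<T> {
        fn ulock(&self) -> MutexGuard<'_, T>;
    }

    impl<T> Ulock<T> for Mutex<T> {
        fn ulock(&self) -> MutexGuard<'_, T> {
            // Combined lock-and-unwrap: panics if the mutex is poisoned,
            // exactly like the usual `.lock().unwrap()` idiom.
            self.lock().unwrap()
        }
    }

    fn main() {
        let mutex_val = Mutex::new(0);
        *mutex_val.ulock() += 1;
        assert_eq!(*mutex_val.ulock(), 1);
    }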


    • C++ offers no guaranteed memory safety.
    • A fictional safe C++ that would inevitably break backwards compatibility might as well be called Noel++, because it’s not the same language anymore.
    • If that proposal ever gets implemented (it won’t), neither will the promise of guaranteed memory safety hold up, nor will any big C++ project adopt it. Big projects don’t even adopt the (rollingly defined) so-called modern C++, and that is something that is part of the language proper, standardized, and available via multiple implementations.

    would you argue that it’s impossible to write a “hello, world” program in C++

    bent as expected


    This proposal is just a part of a damage control campaign. No (supposedly doable) implementation will ever see the light of day. Ping me when this is proven wrong.





  • I specifically mentioned HTTP/2 because it should have been easy for everyone to both test and find the relevant info.

    But anyway, here is a short explanation, and the curl-library thread where the issue was first encountered.

    You should also find plenty of blog posts where “unexplainable delay”/“unexplainable slowness”/“something is stuck” is in the premise, and then after a lot of story development and “suspense”, the big reveal comes that it was Nagle’s fault.

    As with many things TCP: a technique that may have been useful once ends up proving counterproductive when used with modern protocols, workflows, and networks.
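
    For the record, turning Nagle off per socket is a one-liner; here is a minimal sketch using the std API (the host is just a placeholder):

    use std::net::TcpStream;

    fn main() -> std::io::Result<()> {
        let stream = TcpStream::connect("example.com:80")?;
        // Disable Nagle's algorithm: small writes are sent immediately instead
        // of being buffered while waiting for ACKs of already-sent segments.
        stream.set_nodelay(true)?;
        Ok(())
    }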









  • but futures only execute when polled.

    The most interesting part here is that the polling only has to take place on the scope itself. That was actually what I wanted to check, but got distracted because all spawns are awaited in the scope in moro’s README example.

    async fn slp() {
        tokio::time::sleep(std::time::Duration::from_millis(1)).await
    }
    
    async fn _main() {
        let result_fut = moro::async_scope!(|scope| {
            dbg!("d1");
            scope.spawn(async { 
                dbg!("f1a");
                slp().await;
                slp().await;
                slp().await;
                dbg!("f1b");
            });
            dbg!("d2"); // 11
            scope.spawn(async {
                dbg!("f2a");
                slp().await;
                slp().await;
                dbg!("f2b");
            });
            dbg!("d3"); // 14
            scope.spawn(async {
                dbg!("f3a");
                slp().await;
                dbg!("f3b");
            });
            dbg!("d4");
            async { dbg!("b1"); } // never executes
        });
        // Nothing inside the scope has run yet: the scope future is just a value
        // here and does no work until it is polled (awaited) below.
        slp().await;
        dbg!("o1");
        let _ = result_fut.await;
    }
    
    fn main() {
        let rt = tokio::runtime::Builder::new_multi_thread()
            .enable_all()
            .build()
            .unwrap();
        rt.block_on(_main())
    }
    
    [src/main.rs:32:5] "o1" = "o1"
    [src/main.rs:7:9] "d1" = "d1"
    [src/main.rs:15:9] "d2" = "d2"
    [src/main.rs:22:9] "d3" = "d3"
    [src/main.rs:28:9] "d4" = "d4"
    [src/main.rs:9:13] "f1a" = "f1a"
    [src/main.rs:17:13] "f2a" = "f2a"
    [src/main.rs:24:13] "f3a" = "f3a"
    [src/main.rs:26:13] "f3b" = "f3b"
    [src/main.rs:20:13] "f2b" = "f2b"
    [src/main.rs:13:13] "f1b" = "f1b"
    

    The non-awaited jobs are run concurrently, as the moro docs say. But what if we immediately await f2?
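
    The only change needed, assuming moro’s spawned handles can be awaited directly inside the scope body (as in its README example), is awaiting the second spawn:

    scope.spawn(async {
        dbg!("f2a");
        slp().await;
        slp().await;
        dbg!("f2b");
    }).await;

    With that change, the output becomes: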

    [src/main.rs:32:5] "o1" = "o1"
    [src/main.rs:7:9] "d1" = "d1"
    [src/main.rs:15:9] "d2" = "d2"
    [src/main.rs:9:13] "f1a" = "f1a"
    [src/main.rs:17:13] "f2a" = "f2a"
    [src/main.rs:20:13] "f2b" = "f2b"
    [src/main.rs:22:9] "d3" = "d3"
    [src/main.rs:28:9] "d4" = "d4"
    [src/main.rs:24:13] "f3a" = "f3a"
    [src/main.rs:13:13] "f1b" = "f1b"
    [src/main.rs:26:13] "f3b" = "f3b"
    

    f1 and f2 are run concurrently, f3 is run after f2 finishes, but doesn’t have to wait for f1 to finish, which is maybe obvious, but… (see below).

    So two things here:

    1. Re-using the spawn terminology here irks me for some reason. I don’t know what would be better though. Would defer_to_scope() be confusing if the job is awaited in the scope?
    2. Even if assumed obvious, a note about execution order when there is a mix of awaited and non-awaited jobs is worth adding to the documentation IMHO.