Day 17: Clumsy Crucible
Megathread guidelines
- Keep top-level comments to solutions only; if you want to say something other than a solution, put it in a new post. (Replies to comments can be whatever.)
- You can post code in code blocks by using three backticks, then the code, then three backticks again, or use something such as https://topaz.github.io/paste/ if you prefer sharing it through a URL
FAQ
- What is this?: Here is a post with a large amount of details: https://programming.dev/post/6637268
- Where do I participate?: https://adventofcode.com/
- Is there a leaderboard for the community?: We have a programming.dev leaderboard with the info on how to join in this post: https://programming.dev/post/6631465
Scala3
Learning about scala-graph yesterday seems to have paid off already. This explicitly constructs the entire graph of allowed moves and then runs a naive Dijkstra search. It works and doesn't take much code, but it is fairly inefficient.
```scala
import day10._
import day10.Dir._
import day11.Grid

// standing on cell p, having entered from d
case class Node(p: Pos, d: Dir)

def connect(p: Pos, d: Dir, g: Grid[Int], dists: Range) =
    val from = Seq(-1, 1).map(i => Dir.from(d.n + i)).map(Node(p, _))
    val ends = List.iterate(p, dists.last + 1)(walk(_, d)).filter(g.inBounds)
    val costs = ends.drop(1).scanLeft(0)(_ + g(_))
    from.flatMap(f => ends.zip(costs).drop(dists.start).map((dest, c) => WDiEdge(f, Node(dest, d), c)))

def parseGrid(a: List[List[Char]], dists: Range) =
    val g = Grid(a.map(_.map(_.getNumericValue)))
    Graph() ++ g.indices.flatMap(p => Dir.all.flatMap(d => connect(p, d, g, dists)))

def compute(a: List[String], dists: Range): Long =
    val g = parseGrid(a.map(_.toList), dists)
    val source = Node(Pos(-1, -1), Right)
    val sink = Node(Pos(-2, -2), Right)
    val start = Seq(Down, Right).map(d => Node(Pos(0, 0), d)).map(WDiEdge(source, _, 0))
    val end = Seq(Down, Right).map(d => Node(Pos(a(0).size - 1, a.size - 1), d)).map(WDiEdge(_, sink, 0))
    val g2 = g ++ start ++ end
    g2.get(source).shortestPathTo(g2.get(sink)).map(_.weight).getOrElse(-1.0).toLong

def task1(a: List[String]): Long = compute(a, 1 to 3)
def task2(a: List[String]): Long = compute(a, 4 to 10)
```
Rust: https://codeberg.org/Sekoia/adventofcode/src/branch/main/src/y2023/day17.rs
WOW, that took me forever. Completely forgot how to do Dijkstra's, and then struggled with the actual puzzle too. 1h20 total, but I was still top 1500, which I guess means most people really struggled with this, huh.
Yeah, finding a good way to represent the “last three moves” constraint was a really interesting twist. You beat me to it, anyway!
Python
749 line-seconds
```python
import collections
import dataclasses
import heapq
import numpy as np
from .solver import Solver


@dataclasses.dataclass(order=True)
class QueueEntry:
    price: int
    x: int
    y: int
    momentum_x: int
    momentum_y: int
    deleted: bool


class Day17(Solver):
    lines: list[str]
    sx: int
    sy: int
    lower_bounds: np.ndarray

    def __init__(self):
        super().__init__(17)

    def presolve(self, input: str):
        self.lines = input.splitlines()
        self.sx = len(self.lines[0])
        self.sy = len(self.lines)
        start = (self.sx - 1, self.sy - 1)
        self.lower_bounds = np.zeros((self.sx, self.sy)) + np.inf
        self.lower_bounds[start] = 0
        queue: list[QueueEntry] = [QueueEntry(0, self.sx - 1, self.sy - 1, 0, 0, False)]
        queue_entries: dict[tuple[int, int], QueueEntry] = {start: queue[0]}
        while queue:
            cur_price, x, y, _, _, deleted = dataclasses.astuple(heapq.heappop(queue))
            if deleted:
                continue
            del queue_entries[(x, y)]
            self.lower_bounds[x, y] = cur_price
            price = cur_price + int(self.lines[y][x])
            for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nx, ny = x + dx, y + dy
                if not (0 <= nx < self.sx) or not (0 <= ny < self.sy):
                    continue
                if price < self.lower_bounds[nx, ny]:
                    self.lower_bounds[nx, ny] = price
                    if (nx, ny) in queue_entries:
                        queue_entries[(nx, ny)].deleted = True
                    queue_entries[(nx, ny)] = QueueEntry(price, nx, ny, 0, 0, False)
                    heapq.heappush(queue, queue_entries[(nx, ny)])

    def _solve(self, maximum_run: int, minimum_run_to_turn: int):
        came_from: dict[tuple[int, int, int, int], tuple[int, int, int, int]] = {}
        start = (0, 0, 0, 0)
        queue: list[QueueEntry] = [QueueEntry(self.lower_bounds[0, 0], *start, False)]
        queue_entries: dict[tuple[int, int, int, int], QueueEntry] = {start: queue[0]}
        route: list[tuple[int, int]] = []
        prices: dict[tuple[int, int, int, int], float] = collections.defaultdict(lambda: np.inf)
        prices[start] = 0
        while queue:
            _, current_x, current_y, momentum_x, momentum_y, deleted = dataclasses.astuple(heapq.heappop(queue))
            cur_price = prices[(current_x, current_y, momentum_x, momentum_y)]
            if deleted:
                continue
            if ((current_x, current_y) == (self.sx - 1, self.sy - 1) and
                    (momentum_x >= minimum_run_to_turn or momentum_y >= minimum_run_to_turn)):
                previous = came_from.get((current_x, current_y, momentum_x, momentum_y))
                route.append((current_x, current_y))
                while previous:
                    x, y, *_ = previous
                    if x != 0 or y != 0:
                        route.append((x, y))
                    previous = came_from.get(previous)
                break
            for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                dot_product = dx * momentum_x + dy * momentum_y
                if dot_product < 0 or dot_product >= maximum_run:
                    continue
                if ((momentum_x or momentum_y) and dot_product == 0 and
                        abs(momentum_x) < minimum_run_to_turn and abs(momentum_y) < minimum_run_to_turn):
                    continue
                new_x, new_y = current_x + dx, current_y + dy
                if not (0 <= new_x < self.sx) or not (0 <= new_y < self.sy):
                    continue
                new_momentum_x, new_momentum_y = (dx, dy) if dot_product == 0 else (momentum_x + dx, momentum_y + dy)
                new_position = (new_x, new_y, new_momentum_x, new_momentum_y)
                potential_new_price = cur_price + int(self.lines[new_y][new_x])
                if potential_new_price < prices[new_position]:
                    queue_entry = queue_entries.get(new_position)
                    if queue_entry:
                        queue_entry.deleted = True
                    queue_entries[new_position] = QueueEntry(
                        potential_new_price + self.lower_bounds[new_x, new_y], *new_position, False)
                    came_from[new_position] = (current_x, current_y, momentum_x, momentum_y)
                    prices[new_position] = potential_new_price
                    heapq.heappush(queue, queue_entries[new_position])
        return sum(int(self.lines[y][x]) for x, y in route)

    def solve_first_star(self) -> int:
        return self._solve(3, 0)

    def solve_second_star(self) -> int:
        return self._solve(10, 4)
```
Haskell
Wowee, I took some wrong turns solving today’s puzzle! After fixing some really inefficient pruning I ended up with a Dijkstra search that runs in 2.971s (for a less-than-impressive 124.782 l-s).
Solution
```haskell
import Control.Monad
import Data.Array.Unboxed (UArray)
import qualified Data.Array.Unboxed as Array
import Data.Char
import qualified Data.HashSet as Set
import qualified Data.PQueue.Prio.Min as PQ

readInput :: String -> UArray (Int, Int) Int
readInput s =
  let rows = lines s
   in Array.amap digitToInt . Array.listArray ((1, 1), (length rows, length $ head rows)) $ concat rows

walk :: (Int, Int) -> UArray (Int, Int) Int -> Int
walk (minStraight, maxStraight) grid = go Set.empty initPaths
  where
    initPaths = PQ.fromList [(0, ((1, 1), (d, 0))) | d <- [(0, 1), (1, 0)]]
    goal = snd $ Array.bounds grid
    go done paths =
      case PQ.minViewWithKey paths of
        Nothing -> error "no route"
        Just ((n, (p@(y, x), hist@((dy, dx), k))), rest)
          | p == goal && k >= minStraight -> n
          | (p, hist) `Set.member` done -> go done rest
          | otherwise ->
              let next = do
                    h'@((dy', dx'), _) <-
                      join
                        [ guard (k >= minStraight) >> [((dx, dy), 1), ((-dx, -dy), 1)],
                          guard (k < maxStraight) >> [((dy, dx), k + 1)]
                        ]
                    let p' = (y + dy', x + dx')
                    guard $ Array.inRange (Array.bounds grid) p'
                    return (n + grid Array.! p', (p', h'))
               in go (Set.insert (p, hist) done) $ (PQ.union rest . PQ.fromList) next

main = do
  input <- readInput <$> readFile "input17"
  print $ walk (0, 3) input
  print $ walk (4, 10) input
```
(edited for readability)
Haskell
```haskell
import Data.Array.Unboxed
import qualified Data.ByteString.Char8 as BS
import Data.Char (digitToInt)
import Data.Heap hiding (filter)
import qualified Data.Heap as H
import Relude

type Pos = (Int, Int)

type Grid = UArray Pos Int

data Dir = U | D | L | R deriving (Eq, Ord, Show, Enum, Bounded, Ix)

parse :: ByteString -> Maybe Grid
parse input = do
  let l = fmap (fmap digitToInt . BS.unpack) . BS.lines $ input
      h = length l
  w <- fmap length . viaNonEmpty head $ l
  pure . listArray ((0, 0), (w - 1, h - 1)) . concat $ l

move :: Dir -> Pos -> Pos
move U = first pred
move D = first succ
move L = second pred
move R = second succ

nextDir :: Dir -> [Dir]
nextDir U = [L, R]
nextDir D = [L, R]
nextDir L = [U, D]
nextDir R = [U, D]

-- position, previous direction, accumulated loss
type S = (Int, Pos, Dir)

doMove :: Grid -> Dir -> S -> Maybe S
doMove g d (c, p, _) = do
  let p' = move d p
  guard $ inRange (bounds g) p'
  pure (c + g ! p', p', d)

doMoveN :: Grid -> Dir -> Int -> S -> Maybe S
doMoveN g d n = foldl' (>=>) pure . replicate n $ doMove g d

doMoves :: Grid -> [Int] -> S -> Dir -> [S]
doMoves g r s d = mapMaybe (flip (doMoveN g d) s) r

allMoves :: Grid -> [Int] -> S -> [S]
allMoves g r s@(_, _, prev) = nextDir prev >>= doMoves g r s

solve' :: Grid -> [Int] -> UArray (Pos, Dir) Int -> Pos -> MinHeap S -> Maybe Int
solve' g r distances target h = do
  ((acc, pos, dir), h') <- H.view h
  if pos == target
    then pure acc
    else do
      let moves = allMoves g r (acc, pos, dir)
          moves' = filter (\(acc, p, d) -> acc < distances ! (p, d)) moves
          distances' = distances // fmap (\(acc, p, d) -> ((p, d), acc)) moves'
          h'' = foldl' (flip H.insert) h' moves'
      solve' g r distances' target h''

solve :: Grid -> [Int] -> Maybe Int
solve g r = solve' g r (emptyGrid ((lo, minBound), (hi, maxBound))) hi (H.singleton (0, (0, 0), U))
  where
    (lo, hi) = bounds g
    emptyGrid = flip listArray (repeat maxBound)

part1, part2 :: Grid -> Maybe Int
part1 = (`solve` [1 .. 3])
part2 = (`solve` [4 .. 10])
```
C
Not pretty and not efficient. It uses what I think is dynamic programming: essentially I propagate the total cost through an (x, y, heading, steps-in-heading) state space, so every cell in that N-dimensional array holds the minimum total cost known to reach that state, updated iteratively from its neighbours until it settles down.
Debugging was annoying because terminals aren’t great at showing 4D grids. My mistakes were in the initial situation and missing the “4 steps to come to a stop at the end” aspect in part 2.
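To make the idea concrete, here is a rough Python sketch of the same propagate-until-it-settles scheme (not the author's C; `min_heat_loss` is an invented name, and part 1 rules are assumed):

```python
import math

def min_heat_loss(grid: list[list[int]], max_run: int = 3) -> int:
    h, w = len(grid), len(grid[0])
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # (dx, dy); indices 0/1 and 2/3 are opposites
    # cost[(x, y, d, k)] = least heat loss known to reach (x, y)
    # having just moved k times in a row in direction d
    cost: dict[tuple[int, int, int, int], float] = {
        (x, y, d, k): math.inf
        for y in range(h) for x in range(w)
        for d in range(4) for k in range(1, max_run + 1)
    }
    cost[(0, 0, 0, 0)] = 0  # virtual start state: nothing entered yet, run length 0
    changed = True
    while changed:  # keep sweeping until the costs settle down
        changed = False
        for (x, y, d, k), c in list(cost.items()):
            if c == math.inf:
                continue
            for nd, (dx, dy) in enumerate(dirs):
                if k > 0 and nd == (d ^ 1):
                    continue  # no reversing
                nk = k + 1 if nd == d else 1
                if nk > max_run:
                    continue  # at most max_run steps in a row
                nx, ny = x + dx, y + dy
                if not (0 <= nx < w and 0 <= ny < h):
                    continue
                nc = c + grid[ny][nx]
                if nc < cost[(nx, ny, nd, nk)]:
                    cost[(nx, ny, nd, nk)] = nc
                    changed = True
    return int(min(cost[(w - 1, h - 1, d, k)]
                   for d in range(4) for k in range(1, max_run + 1)))
```

Each sweep only lowers costs and costs are bounded below, so the loop terminates; calling `min_heat_loss(grid)` on the parsed digit grid gives the part 1 answer, just slowly.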
Nim
Another tough one. Judging by the relative lack of comments here, I wasn't the only one who had trouble. For me this one was less frustrating and more interesting than day 12, though.
I solved part 1 with a recursive depth-first search, biased towards a zigzag path straight to the goal in order to establish a baseline path cost. Path branches that became more expensive than the current best path terminated early. I also stored direction, speed, and heat-loss data for each tile entered. Any path branch that entered a tile in the same direction and at the same (or greater) speed as a previous path was terminated, unless it had a lower heat loss.
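For illustration, a Python sketch of that pruning rule (the original is Nim; `solve_dfs` and `seen` are invented names, and this only covers part 1, since the dominance argument breaks once a minimum run length is required):

```python
import math
import sys

def solve_dfs(grid: list[list[int]], max_run: int = 3) -> int:
    h, w = len(grid), len(grid[0])
    dirs = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # right and down first: rough zigzag bias
    best = math.inf
    seen: dict[tuple[int, int, int, int], float] = {}  # (x, y, dir, run) -> heat loss

    def dfs(x: int, y: int, d: int, run: int, loss: float) -> None:
        nonlocal best
        if loss >= best:
            return  # can't beat the best finished path
        if (x, y) == (w - 1, h - 1):
            best = loss
            return
        # dominance pruning: this tile was entered before in the same direction,
        # at the same or lower speed, for no more heat loss
        if any(seen.get((x, y, d, s), math.inf) <= loss for s in range(1, run + 1)):
            return
        seen[(x, y, d, run)] = loss
        for nd, (dx, dy) in enumerate(dirs):
            if nd == (d + 2) % 4:
                continue  # no reversing
            nrun = run + 1 if nd == d else 1
            if nrun > max_run:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                dfs(nx, ny, nd, nrun, loss + grid[ny][nx])

    sys.setrecursionlimit(1_000_000)
    dfs(0, 0, 0, 0, 0)  # start facing right with run length 0
    return int(best)
```

Trying right and down before left and up is what gives the rough bias towards the goal described above, so a baseline path cost is found early and the `loss >= best` cut starts paying off.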
This ran pretty slowly, taking around an hour to finish. I took a break and just let it run. Once it completed, it had gotten pretty late, so I did a quick naive modification for part 2 to account for the new movement restrictions, and let that run overnight. The next day it was still running, so I spent some time trying to think of a way to speed it up. Didn’t really get anywhere on my own, so I started reading up on A* to refresh my memory on how it worked.
The solution I arrived at for the rewrite was to use Dijkstra's algorithm to pre-compute a map of the minimum possible cost from each tile to the goal, as if adjacent tiles could be moved to without restriction. I then used that map as the heuristic for A*. While I was writing this, the original part 2 program did finish and gave the correct answer. Since I was already this far in, though, I figured I'd finish the rewrite anyway.
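Roughly, that pre-computation looks like this in Python (not the original Nim; `goal_lower_bounds` is an invented name): a Dijkstra run backwards from the goal with no movement restrictions, giving a lower bound on the remaining cost from every tile.

```python
import heapq
import math

def goal_lower_bounds(grid: list[list[int]]) -> list[list[float]]:
    h, w = len(grid), len(grid[0])
    dist = [[math.inf] * w for _ in range(h)]
    dist[h - 1][w - 1] = 0.0
    pq = [(0.0, w - 1, h - 1)]
    while pq:
        d, x, y = heapq.heappop(pq)
        if d > dist[y][x]:
            continue  # stale queue entry
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                # the cost to *enter* (x, y) from the neighbour must be included,
                # not just the neighbour's own tentative cost
                nd = d + grid[y][x]
                if nd < dist[ny][nx]:
                    dist[ny][nx] = nd
                    heapq.heappush(pq, (nd, nx, ny))
    return dist
```

In the A* search itself, a state's priority would then be its accumulated heat loss plus `goal_lower_bounds(grid)[y][x]`.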
The new program got the wrong answer, but did so very quickly. It turned out that I had a bug in my Dijkstra map. I was sorting the node queue by the currently computed cost to move from that node to the goal, when it instead should have been sorted by that plus the cost to enter that node from a neighbor. Since the node at the head of the queue is removed and marked as finalized on each iteration, some nodes were being finalized before their actual minimum costs were found.
When using the A* algorithm, you normally want your heuristic cost estimate to never overestimate the actual cost to reach the goal from a given node. If it overestimates, the algorithm can overlook routes that are better than the one it computes. That can be useful if you want to find a "good enough" route quickly, but in this case we need the actual best path.
The rest of the code: