USB C is the best thing to happen to peripherals since the mouse.
I would agree with you if there were a simple way to tell what the USB-C cable I have in my hand can be used for without knowing beforehand. Otherwise, for example, I don’t know whether the USB-C cable will charge my device or not. There should have been a simple way to label them for usage that was baked into the standard. As it is, the concept is terrific, but the execution can be extremely frustrating.
For anyone who disagrees with this: the original USB spec was for a reversible connector, and the only reason we didn’t get to have that the whole time is that they wanted to increase profit margins.
That’s the reason Apple released the Lightning connector. They pushed for several features for USB around 2010, including a reversible connector, but the USB-IF refused. Apple wanted USB-C, but couldn’t wait for the USB-IF to come to an agreement, so they could replace the dated 30-pin connector.
Buying a basic, no-frills USB-C cable from a reputable tech manufacturer all but guarantees that it’ll work for essentially any purpose. Of course the shoddy pack-in cables included with a cheap device purchase won’t work well.
I replaced every USB-C-to-C or -A-to-C cable and brick in my house and carry bag with very low-cost Anker cables (except the ones that came with my Google products; those are fine), and now anything charges on any cable.
You wouldn’t say that a razor sucked just because the cheap replacement blades you bought at the dollar store nicked your face, or that a pan was too confusing because the dog food you cooked in it didn’t taste good. So too it is not the fault of USB-C that poorly manufactured charging bricks and cables exist. The standard still works; in fact, it works so well that unethical companies are flooding the market with crap.
IDK, I’ve had PD cables that looked good for a while, but it turned out their data rate was basically USB 2.0. It seems no matter what rule of thumb I try, there are always weird caveats.
Correct. The other commenter is giving bad advice.
Both power delivery and bandwidth are backwards compatible, but they are independent specifications on USB-C cables. You can even get PD capable USB-C cables that don’t transmit data at all.
Also, that’s not true for Thunderbolt cables. Each of the five versions has specific minimum and maximum data and power delivery specifications.
You can even get PD capable USB-C cables that don’t transmit data at all.
I don’t think this is right. The PD standard requires the negotiation of which side is the source and which is the sink, and the voltage/amperage, over those data links. So it has to at least support the bare minimum data transmission in order for PD to work.
Technically, yes, data must transmit to negotiate, but it doesn’t require high throughput. So you’ll get USB 2.0 transfer speeds (480 Mb/s) with most “charging only” USB-C cables. That’s only really useful for a keyboard or mouse these days.
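For a sense of what 480 Mb/s means in practice, here’s a quick back-of-the-envelope sketch (the 100 GB payload is just an illustrative assumption, and protocol overhead is ignored):

```go
package main

import "fmt"

// transferSeconds returns how long a payload of the given size (bytes)
// takes at the given line rate (bits per second), ignoring protocol overhead.
func transferSeconds(payloadBytes, bitsPerSecond float64) float64 {
	return payloadBytes * 8 / bitsPerSecond
}

func main() {
	const backup = 100e9 // a 100 GB photo library, say
	fmt.Printf("USB 2.0 (480 Mb/s): ~%.0f min\n", transferSeconds(backup, 480e6)/60)
	fmt.Printf("10 Gbps cable:      ~%.1f min\n", transferSeconds(backup, 10e9)/60)
}
```

Roughly 28 minutes at USB 2.0 rates versus about 80 seconds at 10 Gbps, which is why a charging-only cable is fine for a keyboard but painful for anything bulk.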
This limitation comes up sometimes when people try to build out a zero-trust cable where they can get a charge but not necessarily transfer data to or from an untrusted device on the other side.
True, but pretty much the only devices that need those are high-end SSDs and laptop docks, and in both cases you just leave the cable with the device rather than pulling it out of your generic cables drawer.
There should have been a simple way to label them for usage that was baked into the standard.
There is. The USB-IF provides an assortment of logos and guidelines for ports and cables to clearly mark data speed (like “10Gbps”), power output (like “100W” or “5A”), whether the port is used for charging (battery icon), etc. But most manufacturers choose not to actually use them for ports.
Cables I’ve seen are usually a bit better about labeling. I have some from Anker and Ugreen that say “SS”, “10Gbps”, or “100W”. If they don’t label the power, it’s probably 3A, and if they don’t label the data speed, it’s usually USB 2.0, though I have seen a couple of cables that support 3.0 and don’t label it.
The really janky ones you get with USB gadgets like fans only have the two power lines hooked up, not the lines needed to communicate PD support. Those work exactly the same as the janky USB-A-to-micro-USB cables those gadgets used to come with, supplying 5V/2A. You throw those away the second you get them and replace them with the decent-quality cables you bought in bulk from AmazonBasics or something.
Nope. My daughter is notorious for mixing up cables when they come out of the brick. Some charge her tablet, some are for data transfer, some charge other devices but not her tablet. It’s super confusing. I had to start labeling them for her.
Come to think of it, all the USB-C cables I have are from phone and device chargers, so I just took it for granted. Good to know. Thanks for sharing some knowledge with me.
USB-C cables can vary drastically. Power delivery alone ranges from less than 1 amp at 5 volts to over 5 amps at 20 volts. That’s 5 watts of power on the low end and 100 watts, sometimes more, on the high end. When a cable meant to run at 5 watts has over 100 watts run through it, the wires get really hot and could catch fire. The charger typically needs to talk to a very small chip (the e-marker) in high-power cables so the cable can say: yes, I can handle the power. Really cheap chargers might just push that power out regardless. So while the USB-C form factor is the one plug to rule them all, the actual execution is a fucking mess.
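The arithmetic behind that range, sketched out below. The 25x heating figure is just Ohm’s-law reasoning about a fixed wire resistance, not a measured value:

```go
package main

import "fmt"

// watts is just volts times amps.
func watts(volts, amps float64) float64 { return volts * amps }

// heatFactor compares resistive heating in a cable's conductors at two
// current levels: for a fixed wire resistance, dissipation scales with I^2.
func heatFactor(ampsHigh, ampsLow float64) float64 {
	return (ampsHigh / ampsLow) * (ampsHigh / ampsLow)
}

func main() {
	fmt.Printf("low end:  %.0f W (5 V x 1 A)\n", watts(5, 1))   // 5 W
	fmt.Printf("high end: %.0f W (20 V x 5 A)\n", watts(20, 5)) // 100 W
	// A cable whose conductors are sized for 1 A but carrying 5 A
	// dissipates roughly 25x the heat in its wires.
	fmt.Printf("conductor heating at 5 A vs 1 A: %.0fx\n", heatFactor(5, 1))
}
```

That I-squared scaling is why pushing full power through an undersized cable is a fire risk rather than just a slow charge.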
Yeah, I totally get that there is a need for cheap power only cables, but why are there what feels like 30 different data “standards”. Just gimme power-only, data, and fast-data. And yeah, in 2 years there’ll be a faster data protocol, so what, that’s then fast-data24, fast-data26, etc. and manufacturers have to use a specific pictogram to label them according to the highest standard they fulfill.
Damn, check out the price of that thing. Someone else linked one at AliExpress for a fraction of that price. But having to spend money on that should not be necessary.
That AliExpress device doesn’t tell you the maximum wattage or data speed the cable can do, just what wattage it’s currently doing (and you’d need to make sure the device you’re testing with on the other side is capable and not having its own issues). It also can’t tell you if the cable has intermittent problems. If all you care about is wattage, then fine. But I find myself caring more about the supported data speeds and the quality of the cable.
But yes, I agree that cables should just be marked what they’re rated for… However it’s possible well built cables exceed that spec and could do better than they’re claiming which just puts us in the same boat of not really knowing.
Edit: oh! And that AliExpress tester only checks 4 lines (USB 2.0, basically)… USB 3.0 over Type-C is 24 pins… You’re not testing jack shit with that AliExpress one. The device I linked will actually map the pins in the cable and will find breaks as well.
The cheaper AliExpress item you actually want is this one; it will read the e-marker and tell you the power/data rates the cable supports, whether it supports Thunderbolt, etc.: https://www.aliexpress.com/item/1005007287415216.html
I agree with USB-C, but there are still a million USB-A devices I need to use, and I can’t be bothered to buy adapters for all of them. And a USB hub is annoying.
Plus, having 1-2 USB-C ports only is never gonna be enough. If they are serious about it, why not have 5?
It really is for me. Those things stick out way too far and might work alright in stationary mode, but while on the go they break easily (speaking from experience) and slip out all the time.
That’s still only 3 simultaneously, if I saw that right. My old Lenovo laptop had 3x USB-A 2.0, 2x USB-A 3.0, RJ45, and HDMI. That was gold. Everything that comes out now is a bloody chore.
You couldn’t buy a USB-C Wi-Fi dongle last time I checked. You have to buy a C-to-A adapter, then use a USB-A Wi-Fi dongle. It’s nuts that those don’t exist.
The PineTab2 shipped with a Wi-Fi chip without any Linux drivers. The drivers eventually got made, but before that, you needed a USB dongle with Ethernet or an adapter.
I would also like a USB-C Wi-Fi dongle for tech support reasons. Sometimes the Wi-Fi hardware fails and you need a quick replacement to figure out what happened.
Some applications need very specific drivers and protocols that aren’t compatible with normal chips. Or you have to connect to a device via WiFi but still need internet.
Also long range WiFi antennas are amazing.
Maybe the preferred Linux distro doesn’t work with them. I had to use another distro for a while because Debian didn’t immediately support the card, but there are apparently cases where the internal card just permanently wouldn’t work (like in fully free software distros). I would rather replace the card inside the laptop than use a dongle, but idk if this can always be the answer.
I agree with OP and I haven’t used a tiling WM in years (used XMonad BTW; i3 was okay). I currently use KDE Plasma 6 because it doesn’t have many drawbacks (used GNOME until Wayland worked properly on KDE), and I can use it pretty well w/o a mouse.
Even for like 20 years after mousing became the primary interface, you could still navigate much faster using keyboard shortcuts / accelerator keys. Application designers no longer consider that feature. Now you are obliged to constantly take your fingers off home position, find the mouse, move it 3cm, aim it carefully, click, and move your hand back to home position, an operation taking a couple of seconds or more, when the equivalent keyboard commands could have been issued in a couple hundred milliseconds.
I don’t think mice were a mistake, but they’re worse for most of the tasks I do. I’m a software engineer and I suck at art, so I just need to write, compile, and test code.
There are some things a mouse is way better for:
drawing (well, a drawing tablet is better)
3d modeling
editing photos
first person shooters (KB works fine for OG Doom though)
bulk file operations (a decent KB interface could work though)
But for almost everything else, I prefer a keyboard.
And while we’re on a tangent, I hate WASD, why shift my fingers over from the normal home row position? It should be ESDF, which feels way more natural…
Thanks, but I’ve got you beat on ESDF: I’m an RDFG man, ever since playing Counter-Strike 1.6. With WASD they usually put crouch or something on Ctrl, but my pinky has a hard time stretching down there; on RDFG my pinky has easy access to QW, AS, ZX, and to Tab, Caps, and Shift with a little stretch. It’s come in handy when playing games with a lot of keybinds.
What pisses me off even more is many games bind to the letter instead of physical key position (e.g. key code), so alternative layouts get a big middle finger. I use Dvorak, and I’ve quit fighting and just switch to QWERTY for games.
I don’t have a problem with hitting Ctrl (I guess I have big hands), but I totally agree that default keybinds largely suck. I wish games came with a handful of popular ones, and bound to key codes so us Dvorak users (or international users) didn’t have to keep switching to QWERTY.
I always rebind to ESDF if the game doesn’t do stupid things preventing it from being practical. The addition of the 1QAZ strip being available to the pinky is a killer feature all on its own. I typically use that for weapon switching, instead of having to stretch up to 1234 and take my fingers off the movement keys.
Tablets are better than mice at drawing, modelling, and photo editing. Mice are good for first-person shooters. Game controllers are better for most other games. You can mouse in dired-mode, I guess, if you’re a casual.
The problem is they generally use E and F for something, which results in a cascade of rebinding.
And yeah, tablets are better, but they’re also more expensive and don’t do other mice things. For how rarely I do 3D modeling and whatnot (pretty rare), making sure my mouse has a middle button is plenty.
And yeah, I much prefer controller, even for FPS since I don’t play competitively (even then, I’ve seen awesome videos about gyro aiming).
E and F certainly is a problem, but developing your own custom key map is almost always part of a larger process of becoming more effective anyway. Typically I start by just moving all left-hand bindings right by one key.
I feel like the mouse is a good generalist, jack-of-all-trades input device, but outside of FPS, I feel that any task that “requires” a mouse is done better with a tablet. They are of equivalent price, honestly. Mice are not cheap, tablets are not expensive.
Right now I am using voice dictation because it is better than typing on a phone, but oh my God it sucks so bad.
It’s also an age thing. My visual processing is getting worse and worse. My disorientation facing a busy screen with literally thousands of objects that can be interacted with by mouse is a cognitive drain compared to a textual interface where I do most of the work abstractly without having to use visual processing at all. Like reading a book vs watching a movie.
I probably have a lot more experience using pre-mouse era computers than most people. It’s like being asked to start using a different language when you are 20. Yeah, you’ll become perfectly fluent for a couple decades… but you’ll also lose that language first when you get old.
I have noticed that millennials navigate multilayer mouse interfaces (like going down a few chained drop-down menus) way faster than I ever did. And zoomers use touch-screen keyboards almost as well as I ever touch-typed. Brains are only plastic to a degree, and it just plain feels good to use all those neurons that you first laid down when you were young and your mind was infinite.
I love trackballs (except that Kensington above. It was basically a pinch your skin torture device.) I still use the Logitech M570 trackball. It’s pretty good.
My favorite of all time though was the Logitech Trackman Vista. Absolutely perfect form factor that Logitech just gave up on one day and I will never know why.
That functionality (first necessary, then required by guidelines, then expected, and then still usual) disciplined UI designers to make things doable in a clear sequence of actions.
Now they think any ape can make a UI if it knows the new shiny buzzwords like “material design” or “air” or whatever. And they do! Except humans can’t use those UIs.
BTW, about their “air”. One can look at ancient UI paradigms, specifically SunView, OpenLook and Motif (I’m currently excited about Sun history again), Windows 3.*, and also Win9x (with WinXP being more or less inside the same paradigm). Of these, only Motif had anything resembling their “air”. And Motif is generally considered clunky and less usable than the rest (I personally consider OpenLook the best), but compared to modern UIs even Motif does that “air” part in a way that seems to make some sense, and feels less clunky, making me wonder how that is even possible.
FFS, modern UI designers don’t even think it’s necessary to clearly and consistently separate buttons and links from text.
And also: freedom in Web and UI design has proven to be a mistake. UIs should be native. Web browsers should display pages adaptively (we have such-and-such blocks of text and such-and-such links); their appearance should be decided on the client and be native too, except pictures. Gemini is the right way to go for the Web.
I feel your pain and irrelevancy with crystalline clarity. The world isn’t interested in doing things the right way, or even in a good way; consumers are too perversely enthralled by capital’s interests. I kind of hate that computers ever became a consumer good.
When I’m “computering” for efficiency, I don’t take my hands off the keyboard. Half of my job is on a standard keyboard, and so familiarizing myself with all the shortcuts and whatnot saves a lot of time versus having to travel back and forth to a mouse or track pad.
When I am just satisfying the dopamine urges, it’s mouse all the way.
Sure, it’s not 100% better in all situations. But when you’re unfamiliar with something, almost universally, it’s far more intuitive.
And this doesn’t even take into account things like gaming. I also can’t imagine trying to do visual design solely with the keyboard, like any type of drawing or schematic design.
Being pretty adept at using the keyboard, I’m often frustrated when I find out that the only way to do something is by mouse when there appears that there should be an easy way to do it by keyboard. But, man, I can’t imagine longing for the days before the mouse.
Yes, the mouse is useful in many situations (esp 3d modeling), so I don’t think anyone is arguing that it shouldn’t exist.
The problem, however, is that we’ve standardized on it for everything, to the point where software often ignores a better KB-driven workflow because the mouse one is good enough. “When all you have is a hammer…”
We’ve prioritized “intuitive” over “efficient.” There’s nothing wrong with learning to properly use a tool, and it’s sad that we don’t expect users to put in that modicum of effort. In the 80s and 90s, that’s just how things were: you either learned the tools (often with a handbook) or you didn’t use them. The net result was a populace that didn’t need support as much, because they were used to reading the docs. If a component died, the docs would tell you how to diagnose and fix it. These days, those docs just don’t exist, so if the solution isn’t intuitive, you replace it (both hardware and software).
That’s where this frustration comes from. Making things intuitive also means reducing the average person’s understanding of their tools, and the mouse is a symptom of that shift.
I would argue, overall, it’s more efficient to aim for the former than the latter, especially if we are talking about the wide range of people who need to use a computer.
But I’m curious as to the “actions per minute” type of efficiency that people are talking about here. I’m an engineer, who has moved into computer programming. I would say the bottleneck for me is never that I have to move my hand to my mouse, but it’s always about thinking and planning. I feel like this “it’s so much more efficient” is viewing us as almost machines that are just trying to output actions, rather than think through and solve problems.
The net result was a populace that didn’t need support as much, because they were used to reading the docs. If a component died, the docs would tell you how to diagnose and fix it.
I think this is more of a problem that it went from an extremely niche thing, to something that almost everyone is required to use, rather than a move away from keyboard only. Or, maybe, the rise of the mouse opened the computer to everyone being able to use it, which is why it has become so ubiquitous.
To me it’s more about ergonomics. Most of my time is spent reading code and sending messages. I use ViM or at least ViM bindings for reading code because it’s so much nicer for navigating code than clicking and scrolling:
go to definition? - gd
find in file? - /query
match braces/quotes? - %
I’m not saying everyone should learn ViM, I’m just using it as an example. I’m much less concerned about maximizing my text entry speed and more interested in maximizing ergonomics of the tools I use the most every day. For me that’s my text editor and terminal, followed closely by my browser.
I have no problem with a good mouse UI (I love mouse mode in ViM), my problem is when there isn’t an alternative power user UX (shortcuts and whatnot).
This extends to a ton of things. Let’s say you want to search for a file, but the GUI indexed search isn’t working properly (maybe it didn’t index your file? Or maybe you need more than string contains?). If you’re comfortable on the CLI and understand regex, you’re set. Or maybe you need to do some bulk change across files, the CLI is going to be really efficient. It’s less about total productivity but not having to do stupid repetitive tasks because that’s my only option. I’d much rather write a script than do the repetitive thing even if the total time spent is equivalent.
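As a sketch of what that script route can look like, here’s a minimal recursive regex search that doesn’t depend on any index (the TODO/FIXME pattern and the current-directory root are just placeholders):

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"regexp"
	"strings"
)

// grepLines returns the 1-based line numbers in content that match pattern.
func grepLines(content, pattern string) []int {
	re := regexp.MustCompile(pattern)
	var hits []int
	for i, line := range strings.Split(content, "\n") {
		if re.MatchString(line) {
			hits = append(hits, i+1)
		}
	}
	return hits
}

func main() {
	// Walk the tree and report file:line for every match.
	filepath.WalkDir(".", func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			return nil
		}
		data, err := os.ReadFile(path)
		if err != nil {
			return nil // skip unreadable files
		}
		for _, n := range grepLines(string(data), `TODO|FIXME`) {
			fmt.Printf("%s:%d\n", path, n)
		}
		return nil
	})
}
```

Ten minutes to write something like this once, versus clicking through a broken GUI search every time.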
People just aren’t learning the power user stuff these days and look at me like I’m a wizard because I can use tools written 40 years ago…
Sounds like I’m glad “home row” style typing fell out of favour. It may be the theoretically fastest way to type eventually, but it seems to lead to pretty rigid behaviour. Adapting to new things as they come along and changing your flow to move with them instead of against them is just a much more comfortable way to live. Even if I only type 80% as fast.
I have no idea what you mean by “fell out of favour”. Does your keyboard not have pips on F and J? People still touch type. Dunno what to tell you.
You’re getting hung up on “home row”. You still have to move your hand from the keyboard to the mouse and back. It’s the same problem, whether or not you know how to type well and stare at your hands, except now you have to add steps for “look at the screen” and “look back at your hands”.
Fell out of favour in that it isn’t taught as “the correct way to type” any more. Largely because most devices you type on now wouldn’t even have physical keys. So learning home row typing for the occasional time the thing you are typing on is a physical full sized keyboard just disrupts the flow of everything else.
Being perfectly optimal isn’t as productive as it feels, especially when it leads to resistance to change and adapt.
Yup. My kids learned how to type properly, and they’re in elementary school. And no, their teachers aren’t boomers, they’re a mix of millennials and gen z.
Hmm, is that a States thing then? Typing courses around here have capitulated on it. You can choose to learn it if max typing speed is the most important factor, but alternate forms of touch typing and muscle memory are fully accepted now. Oftentimes, just due to the varying amount of personal practice, the fastest typist in class isn’t even a home-row kid.
But way back when I was in school, they constantly tried to force me to switch to home row, despite already having years of practice typing outside of school. I was already a faster typer than the teacher, so they had a hard time convincing me that their way was better. I eventually saw enough data on it to believe it, but I’m still glad I was unconvinced at the time. I still type fast enough to get any typing job, but I’m not so rigid and can use various types of keyboard equally well. Home row is very good at one thing, but it makes you prioritise that one thing too much. If you really wanted to type fast, but be limited to only one set of hardware, stenography is one step more in that direction.
Yes, it is taught. If you take a typing course, you will be taught to use home row. What you mean is, you were never taught to type, because we don’t teach that in school anymore. If you do most of your typing on a touch screen, I have to imagine you are quite young. In 20 years, when no one is using a touch screen to enter text anymore (but people likely still use physical keyboards), you will remember this conversation and have some greater insight.
Whether or not you know how to touch type, in any situation where there is a mouse, THERE IS A PHYSICAL KEYBOARD. Not knowing how to touch type just makes the task switching overhead greater.
I’ve used ion, ratpoison, i3, sawfish, and other tiling window managers for fifteen or more years, all totaled up. There is a great deal of pressure to use a modern desktop environment and it’s a lot of work maintaining my janky bespoke desktop environment functions necessary for a few critical applications. I use KDE’s tiling features and keyboard shortcuts, but it’s a double edged sword because I have to disable all window manager bindings in (for example) Blender and emacs to avoid shadowing important features. Actually, I have re-implemented a lot of my custom KDE shortcuts as emacs bindings as well, so they still work when emacs has the focus. Here’s one:
For my WM+Emacs work, I unified the shortcuts by calling a separate Go bin that checks whether the active window is Emacs. If it is, it sends the command to the Emacs daemon; if not, it sends the command to i3. For directional commands like move-focus, it first checks if there’s an Emacs window on that side, and if not, sends the command to i3.
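That dispatch idea can be sketched roughly like this. The xdotool invocation, the emacsclient call, and the my/wm-dispatch elisp function are all assumptions standing in for whatever the commenter’s actual binary does:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// targetFor picks the recipient for a shortcut based on the active
// window's class: Emacs handles its own commands, everything else goes to i3.
func targetFor(windowClass string) string {
	if strings.EqualFold(windowClass, "emacs") {
		return "emacs"
	}
	return "i3"
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: wmdispatch <command>")
		return
	}
	cmd := os.Args[1] // e.g. "focus left"

	// Assumed way to read the active window's class (via xdotool).
	out, _ := exec.Command("xdotool", "getactivewindow", "getwindowclassname").Output()
	class := strings.TrimSpace(string(out))

	if targetFor(class) == "emacs" {
		// Hand off to the Emacs daemon; my/wm-dispatch is a placeholder for
		// an elisp function that can fall back to i3 at Emacs window edges.
		exec.Command("emacsclient", "-e", fmt.Sprintf("(my/wm-dispatch %q)", cmd)).Run()
	} else {
		exec.Command("i3-msg", cmd).Run()
	}
}
```

Bind every i3 shortcut to this one binary and the same keystrokes work inside and outside Emacs.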
To an extent. In the early 90s I could navigate WordPerfect in DOS faster than I’ve ever been able to work in MS Word, because it was all keyboard, even before I learned proper home-row ten-finger typing in high school. Technically my first word processor was WordStar on one of those Osborne “portable” computers with the 5-inch screen when I was a young kid, but WordPerfect was what I did my first real word processing on when I started using it for school projects. So I might just be the old ‘how do you do, fellow kids’ guy in this sort of discussion.
To this day, I still prefer mc (Midnight Commander, a Linux-flavored recreation of Norton Commander; it does have a Windows port, YMMV) to navigate filesystems for non-automated file management.
I’ve been thoroughly conditioned for mouse use since the mid-late 90s (I call it my Warcraft-Quake era, we still used keyboard only for Doom 1/2 back in the early days), and I feel like it’s a crutch when I’m trying to do productive work instead of gaming. When I spend a few days working using remote shells, I definitely notice a speed increase. Then a few days later I lose it all again when I’m back on that mouse cursor flow brain.
I call it my Warcraft-Quake era, we still used keyboard only for Doom 1/2 back in the early days
This is my main reason for not pining for the days before the mouse: it made gaming 100000x better. I remember when we first started playing Quake, a lot of the guys swore by keyboard only, until I regularly destroyed them with the mouse… and they all switched over.
I’ve also done a lot of graphic design, photo editing, schematic design, etc., and can’t imagine having to do that solely with the keyboard (but again, I’m often like “why isn’t there a keyboard shortcut for this?”).
Also, when it comes to productivity, I guess it depends on what you are doing because usually my big hurdle is not how quickly I can do actions (that is usually more important in video games, tbh), the big hurdle is sitting down and thinking about how to do it correctly.
Has the keyboard and mouse versus controller argument finally died? I mean I use a controller for things like Elden Ring and keyboard and mouse for first person or tactics/strategy games.
We proved twenty years ago that keyboard and mouse was better for first person gaming and I was still hearing arguments that controllers were better five years ago.
I have a game controller and a mouse. I don’t use my game controller to code. I don’t use my keyboard to sculpt. The problem isn’t that mice exist at all, it’s that they are overwhelmingly dominant, to the point where most applications do not cater to anything else.
Now I see your original point in a new light. I just viewed it as a natural progression that the mouse would take over as the primary input because of its usefulness and intuitiveness. So when you said you “hated” this, I interpreted it as hate for mice in general and a wish for pre-mouse days, rather than just a move back towards the keyboard being the primary interface.
Nah, USB-A was the best since it replaced serial ports (esp PS/2, which was much harder to plug in) and outlived/outclassed FireWire. USB-C is the best thing since HDMI (screw you, VGA and DVI), which was the best since USB-A.
Fuck firewire. Glad it’s dead. USB C is the best thing to happen to peripherals since the mouse.
I would agree with you if there were a simple way to tell what the USB-C cable I have in my hand can be used for without knowing beforehand. Otherwise, for example, I don’t know whether the USB-C cable will charge my device or not. There should have been a simple way to label them for usage that was baked into the standard. As it is, the concept is terrific, but the execution can be extremely frustrating.
Hey that’s a fair point. Funny how often good ideas are kneecapped by crap executions.
I’m pretty sure the phrase “kneecapped by crap executions” is in the USB working group’s charter. It’s like one of their core guiding principles.
For anyone who disagrees with this: the original USB spec was for a reversible connector, and the only reason we didn’t get to have that the whole time is that they wanted to increase profit margins.
USB has always been reversible. In fact you have to reverse it at least 3 times before it’ll FUCKING PLUG IN.
That’s the reason Apple released the Lightning connector. They pushed for several features for USB around 2010, including a reversible connector, but the USB-IF refused. Apple wanted USB-C, but couldn’t wait for the USB-IF to come to an agreement, so they could replace the dated 30-pin connector.
I’m sure they were mortified they needed to release a proprietary connector.
Buying a basic, no-frills USB-C cable from a reputable tech manufacturer all but guarantees that it’ll work for essentially any purpose. Of course the shoddy pack-in cables included with a cheap device purchase won’t work well.
I replaced every USB-C-to-C or -A-to-C cable and brick in my house and carry bag with very low-cost Anker cables (except the ones that came with my Google products; those are fine), and now anything charges on any cable.
You wouldn’t say that a razor sucked just because the cheap replacement blades you bought at the dollar store nicked your face, or that a pan was too confusing because the dog food you cooked in it didn’t taste good. So too it is not the fault of USB-C that poorly manufactured charging bricks and cables exist. The standard still works; in fact, it works so well that unethical companies are flooding the market with crap.
Burn all the USB-C cables with fire except PD. The top PD cable does everything the lower cables do.
IDK, I’ve had PD cables that looked good for a while, but it turned out their data rate was basically USB 2.0. It seems no matter what rule of thumb I try, there are always weird caveats.
No, I’m not bitter, why would you ask that?
There are many PD cables that are bad for doing data.
Correct. The other commenter is giving bad advice.
Both power delivery and bandwidth are backwards compatible, but they are independent specifications on USB-C cables. You can even get PD capable USB-C cables that don’t transmit data at all.
Also, that’s not true for Thunderbolt cables. Each of the five versions has specific minimum and maximum data and power delivery specifications.
I don’t think this is right. The PD standard requires the negotiation of which side is the source and which is the sink, and the voltage/amperage, over those data links. So it has to at least support the bare minimum data transmission in order for PD to work.
Technically, yes, data must transmit to negotiate, but it doesn’t require high throughput. So you’ll get USB 2.0 transfer speeds (480 Mb/s) with most “charging only” USB-C cables. That’s only really useful for a keyboard or mouse these days.
This limitation comes up sometimes when people try to build out a zero-trust cable where they can get a charge but not necessarily transfer data to or from an untrusted device on the other side.
You forgot Thunderbolt and USB4 exist now.
True, but pretty much the only devices that need those are high-end SSDs and laptop docks, and in both cases you just leave the cable with the device rather than pulling it out of your generic cables drawer.
You can buy a single cable that does 40 Gbps USB4 and charges at 240 W.
There is. The USB-IF provides an assortment of logos and guidelines for ports and cables to clearly mark data speed (like “10Gbps”), power output (like “100W” or “5A”), whether the port is used for charging (battery icon), etc. But most manufacturers choose not to actually use them for ports.
Cables I’ve seen are usually a bit better about labeling. I have some from Anker and UGREEN that say “SS”, “10Gbps”, or “100W”. If they don’t label the power, it’s probably 3A, and if they don’t label the data speed, it’s usually USB 2.0, though I have seen a couple of cables that support 3.0 and don’t label it.
Don’t all USB-C cables have the capability to do Power Delivery? I thought it was up to the port you plugged it into to support it?
The really janky ones you get with USB gadgets like fans only have the two power lines hooked up, not the lines needed to communicate PD support. Those work exactly the same as the janky USB-A-to-micro-USB cables the same gadgets used to come with, supplying 5V/2A. You throw those away the second you get them and replace them with decent-quality cables you bought in bulk from AmazonBasics or something.
Nope. My daughter is notorious for mixing up cables when they come out of the brick. Some charge her tablet, some are for data transfer, some charge other devices but not her tablet. It’s super confusing. I had to start labeling them for her.
Come to think of it, all the USB C cables I have are from phone and device chargers so I just took it for granted. Good to know. Thanks for sharing some knowledge with me
USB-C cables can vary drastically. Power delivery alone ranges from less than 1 amp at 5 volts to over 5 amps at 20 volts. That’s 5 watts of power on the low end to 100 watts on the high end, and sometimes more. When a cable meant to run at 5 watts has over 100 watts run through it, the wires get really hot and could catch fire. The charger typically needs to talk to a very small chip in high-power cables so the cable can say, “yes, I can handle this power.” Really cheap chargers might just push that power out regardless. So while the USB-C form factor is the one plug to rule them all, the actual execution is a fucking mess.
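To make that concrete, here’s a rough sketch (in Python, with made-up names, not a real PD library) of the safety rule a compliant charger is supposed to follow: an unmarked cable gets at most 3 A, and only a cable whose e-marker chip vouches for itself gets driven at 5 A.

```python
# Rough sketch of the safety rule a compliant USB-C PD source follows.
# All names here are illustrative, not a real PD implementation.

DEFAULT_CABLE_LIMIT_A = 3.0  # unmarked cables are only rated for 3 A

def max_current(emarker_rating_a=None):
    """Return the current a source may drive through a cable.

    emarker_rating_a: the current advertised by the cable's e-marker
    chip, or None if the cable has no e-marker (cheap/charge-only).
    """
    if emarker_rating_a is None:
        # No chip answered "yes, I can handle it": stay at 3 A.
        return DEFAULT_CABLE_LIMIT_A
    return min(emarker_rating_a, 5.0)  # the spec caps cables at 5 A

# 20 V * 5 A = 100 W only happens with a 5 A e-marked cable:
print(20 * max_current(5.0))   # 100.0
print(20 * max_current(None))  # 60.0 -- a plain cable tops out at 60 W at 20 V
```

The “really cheap chargers” complaint is exactly a source skipping the `None` branch and pushing full current anyway.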
Agreed. They should be labeled with the rating.
This little guy works wonders for me.
https://www.aliexpress.com/item/1005002371533933.html
Oh very cool! And you can’t beat that price. Thanks.
No problem! Oh, and use a charger/power supply for the input. It’ll work on a computer port, but I wouldn’t recommend it.
Yeah, I wouldn’t trust it on a computer port. I’d just plug it into a power brick.
Yeah, I totally get that there is a need for cheap power-only cables, but why are there what feels like 30 different data “standards”? Just gimme power-only, data, and fast-data. And yeah, in 2 years there’ll be a faster data protocol; so what, that’s then fast-data24, fast-data26, etc., and manufacturers have to use a specific pictogram to label them according to the highest standard they fulfill.
https://caberqu.com/home/39-ble-caberqu-0611816327412.html
This would do it.
Damn, check out the thing someone else linked on AliExpress for a fraction of that price. But having to spend money on that shouldn’t be necessary.
That AliExpress device doesn’t tell you what wattage or data speed the cable will max out at, just what wattage it’s currently doing (and you’d need to make sure the device you’re testing with on the other side is capable and not having its own issues). It also can’t tell you if the cable is having intermittent problems. If all you care about is wattage, then fine. But I find myself caring more about the supported data speeds and the quality of the cable.
But yes, I agree that cables should just be marked with what they’re rated for… However, it’s possible well-built cables exceed that spec and could do better than they claim, which puts us back in the same boat of not really knowing.
Edit: oh! And that AliExpress tester only tests 4 lines (USB 2.0, basically)… USB 3.0 over Type-C uses 24 pins… You’re not testing jack shit with that AliExpress unit. The device I linked will actually map the pins in the cable and find breaks as well.
The cheaper AliExpress item you actually want is this one; it will read the e-marker and tell you the power/data rates the cable supports, whether it supports Thunderbolt, etc. https://www.aliexpress.com/item/1005007287415216.html
Some photos of it in action https://bitbang.social/@kalleboo/109391700632886806
I agree with USB-C, but there are still a million USB-A devices I need to use, and I can’t be bothered to buy adapters for all of them. And a USB hub is annoying.
Plus, having only 1-2 USB-C ports is never gonna be enough. If they’re serious about it, why not have 5?
Yeah, I’d love at least one USB-A port, ’cause most of the peripherals I own use that.
It’s not that bad
It really is for me. Those things stick out way too far and might work alright in stationary mode, but while on the go they break easily (speaking from experience) and slip out all the time.
What does ‘anti-top shell design’ mean?
An anti-top-shell design is aimed at preventing the accumulation of debris on the top surface.
I bought some adaptors in China for around $0.50 each. It really isn’t that big of a deal
It really is a big deal for me, they stick out too far and are making the whole setup flimsy.
Then just buy a Framework like I did and switch ports whenever you feel like it
That’s still only 3 simultaneously if I saw that right. My old Lenovo laptop had 3 USB-A 2.0 ports, 2 x USB-A 3.0, RJ45 and HDMI. That was gold. Everything that comes now is a bloody chore.
You can have 6 ports of any kind you like on the Framework 16
Oh nice, that’s something.
You can’t buy a USB-C WiFi dongle, last time I checked. You have to buy a C-to-A adapter, then use a USB-A WiFi dongle. It’s nuts that those don’t exist.
Genuine question - what device do you have that has USB-C ports, no USB-A ports, doesn’t have WiFi, but supports the dongle?
The PineTab2 shipped with a WiFi chip without any Linux drivers. The drivers eventually got made, but before that, you needed a USB Ethernet dongle or an adapter.
I would also like a USB-C WiFi dongle for tech-support reasons. Sometimes the WiFi hardware fails and you need a quick replacement to figure out what happened.
Why do you need a wifi dongle when wifi is built into every single laptop sold?
Some applications need very specific drivers and protocols that aren’t compatible with normal chips. Or you have to connect to a device via WiFi but still need internet. Also long range WiFi antennas are amazing.
My first thought was hacking.
As I said, specific “applications” :D
Maybe the preferred Linux distro doesn’t work with them. I had to use another distro for a while because Debian didn’t immediately support the card, but there are apparently cases where the internal card just permanently wouldn’t work (like in fully free software distros). I would rather replace the card inside the laptop than use a dongle, but idk if this can always be the answer.
I hated when mice became the primary interface to computers, and I still do.
tell me you use i3 without telling me you use i3
I agree with OP and I haven’t used a tiling WM in years (used XMonad BTW; i3 was okay). I currently use KDE Plasma 6 because it doesn’t have many drawbacks (used GNOME until Wayland worked properly on KDE), and I can use it pretty well w/o a mouse.
Is this for real?
Even for like 20 years after mousing became the primary interface, you could still navigate much faster using keyboard shortcuts / accelerator keys. Application designers no longer consider that feature. Now you are obliged to constantly take your fingers off home position, find the mouse, move it 3cm, aim it carefully, click, and move your hand back to home position, an operation taking a couple of seconds or more, when the equivalent keyboard commands could have been issued in a couple hundred milliseconds.
I love how deeply nerdy Lemmy is. I’m a bit of a nerd but I’m not “mice were a mistake” nerd.
I don’t think mice were a mistake, but they’re worse for most of the tasks I do. I’m a software engineer and I suck at art, so I just need to write, compile, and test code.
There are some things a mouse is way better for, but for almost everything else, I prefer a keyboard.
And while we’re on a tangent, I hate WASD, why shift my fingers over from the normal home row position? It should be ESDF, which feels way more natural…
Thanks. I’ve got you beat on ESDF though, because I’m an RDFG man, ever since playing Counter-Strike 1.6. With WASD they usually put crouch or something on Ctrl, but my pinky has a hard time stretching down there; on RDFG my pinky has easy access to QW, AS, ZX, and, with a little stretch, Tab, Caps, and Shift. It’s come in handy when playing games with a lot of keybinds.
Pfff, minutes after trying to minimize your nerdiness, you post this confession.
lol
What pisses me off even more is that many games bind to the letter instead of the physical key position (i.e., the key code), so alternative layouts get a big middle finger. I use Dvorak, and I’ve quit fighting and just switch to QWERTY for games.
I don’t have a problem with hitting Ctrl (I guess I have big hands), but I totally agree that default key binds largely suck. I wish games came with a handful of popular ones, and bound to key codes so us Dvorak users (or international users) didn’t have to keep switching to QWERTY.
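The mis-binding the two comments above describe is easy to show with a toy layout table. The scancode numbers below happen to match the usual Linux keycodes for W/A/S/D, but the tables are purely an illustration, not a real input API:

```python
# Toy illustration of why games should bind by physical key position
# (key code / scancode) rather than by the letter it types.
# Layout tables map physical-position codes to the letter produced.

QWERTY = {17: "w", 30: "a", 31: "s", 32: "d"}
DVORAK = {17: ",", 30: "a", 31: "o", 32: "e"}  # same positions, different letters

def action_for(pressed_scancode, bindings, layout):
    """Resolve a keypress against bindings keyed by letter (the bad way)."""
    letter = layout[pressed_scancode]
    return bindings.get(letter)

letter_bindings = {"w": "forward", "a": "left", "s": "back", "d": "right"}

# Under QWERTY, the W-position key moves forward...
print(action_for(17, letter_bindings, QWERTY))   # forward
# ...under Dvorak, the same physical key types "," and the bind is lost.
print(action_for(17, letter_bindings, DVORAK))   # None

# Binding by scancode instead keeps the physical positions on every layout:
scancode_bindings = {17: "forward", 30: "left", 31: "back", 32: "right"}
print(scancode_bindings[17])  # forward, regardless of layout
```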
That feel when you switch languages to chat and the hotkeys don’t work
I always rebind to ESDF if the game doesn’t do stupid things preventing it from being practical. The addition of the 1QAZ strip being available to the pinky is a killer feature all on its own. I typically use that for weapon switching, instead of having to stretch up to 1234 and take my fingers off the movement keys.
Tablets are better than mice at drawing, modelling, and photo editing. Mice are good for first-person shooters. Game controllers are better for most other games. You can mouse in dired-mode, I guess, if you’re a casual.
The problem is they generally use E and F for something, which results in a cascade of rebinding.
And yeah, tablets are better, but they’re also more expensive and don’t do other mice things. For how rarely I do 3D modeling and whatnot (pretty rare), making sure my mouse has a middle button is plenty.
And yeah, I much prefer controller, even for FPS since I don’t play competitively (even then, I’ve seen awesome videos about gyro aiming).
E and F is certainly a problem, but developing your own custom key map is almost always part of a larger process of becoming more effective anyway. Typically I start by just moving all left-hand bindings right by one key.
I feel like the mouse is a good generalist, jack-of-all-trades input device, but outside of FPS, I feel that any task that “requires” a mouse is done better with a tablet. They are of equivalent price, honestly. Mice are not cheap, tablets are not expensive.
Right now I am using voice dictation because it is better than typing on a phone, but oh my God it sucks so bad.
I am using ESDF because my “A” key stopped being as responsive, didn’t expect someone to do this on purpose!
It’s just more ergonomic. My hands are already there, why shift them? Oh, and use QAZ instead of Tab, Shift, and Ctrl, they’re right there.
It’s also an age thing. My visual processing is getting worse and worse. My disorientation facing a busy screen with literally thousands of objects that can be interacted with by mouse is a cognitive drain compared to a textual interface where I do most of the work abstractly without having to use visual processing at all. Like reading a book vs watching a movie.
I probably have a lot more experience using pre-mouse era computers than most people. It’s like being asked to start using a different language when you are 20. Yeah, you’ll become perfectly fluent for a couple decades… but you’ll also lose that language first when you get old.
I have noticed that millennials navigate multilayer mouse interfaces (like going down a few chained drop-down menus) way faster than I ever did. And zoomers use touch-screen keyboards almost as well as I ever touch-typed. Brains are only plastic to a degree, and it just plain feels good to use all those neurons that you first laid down when you were young and your mind was infinite.
I just use a mouse to type in stuff using the on screen keyboard. It’s annoying having to take the ball out and clean it, but you get used to it.
I used the Logitech optical trackball mouse for quite a few years! Did not play a lot of FPS at that time…
I love trackballs (except that Kensington above. It was basically a pinch your skin torture device.) I still use the Logitech M570 trackball. It’s pretty good.
My favorite of all time though was the Logitech Trackman Vista. Absolutely perfect form factor that Logitech just gave up on one day and I will never know why.
Looks like it’s a finger ball, rather than the thumb ball Logitech usually favours? Middle finger on LMB, ring finger on RMB? Ohhh, THUMB on LMB.
Still use one to control the PC when i’m in bed. =)
US military: “Perfection.”
Hey they made new technology where you can just yell at the computer and it’ll understand 60% of what you’re saying.
Reminds me of the ancient technology where you just kick it until you get a more tractable problem.
I kept every mouse ball I ever obtained and display them in my china cabinet.
That functionality (first necessary, then required by guidelines, then expected, and then still usual) disciplined UI designers to make things doable in a clear sequence of actions.
Now they think any ape can make a UI if it knows the new shiny buzzwords like “material design” or “air” or whatever. And they do! Except humans can’t use those UIs.
BTW, about their “air”: one can look at ancient UI paradigms, specifically SunView, OpenLook, and Motif (I’m currently excited about Sun history again), Windows 3.*, and also Win9x (with WinXP being more or less inside the same paradigm). Of these, only Motif had anything resembling their “air”. And Motif is generally considered clunky and less usable than the rest (I personally consider OpenLook the best), but compared to modern UIs even Motif does that “air” part in a way that seems to make some sense, and feels less clunky, making me wonder how that is even possible.
FFS, modern UI designers don’t even think it’s necessary to clearly and consistently separate buttons and links from text.
And also - freedom in Web and UI design has proven to be a mistake. UIs should be native. Web browsers should display pages adaptively (we have such and such blocks of text and such and such links), their appearance should be decided on the client and be native too, except pictures. Gemini is the right way to go for the Web.
I feel your pain and irrelevancy with crystalline clarity. The world isn’t interested in doing things the right way, or even in a good way; consumers are too perversely enthralled by capital’s interests. I kind of hate that computers ever became a consumer good.
When I’m “computering” for efficiency, I don’t take my hands off the keyboard. Half of my job is on a standard keyboard, and so familiarizing myself with all the shortcuts and whatnot saves a lot of time versus having to travel back and forth to a mouse or track pad.
When I am just satisfying the dopamine urges, it’s mouse all the way.
Sure. It’s a lowest common denominator interface. With all that comes with that.
Lowest common denominator interface is definitely touch screen, then maybe game controllers but I love those for some games.
Edit - TV remotes!
Sure, it’s not 100% better in all situations. But when you’re unfamiliar with something, almost universally, it’s far more intuitive.
And this doesn’t even take into account things like gaming. I also can’t imagine trying to do visual design things solely with the computer. Like any type of drawing or schematic design.
Being pretty adept at using the keyboard, I’m often frustrated when I find out that the only way to do something is by mouse when there appears that there should be an easy way to do it by keyboard. But, man, I can’t imagine longing for the days before the mouse.
Yes, the mouse is useful in many situations (esp 3d modeling), so I don’t think anyone is arguing that it shouldn’t exist.
The problem, however, is that we’ve standardized on it for everything, to the point where software often ignores a better KB-driven workflow because the mouse one is good enough. “When all you have is a hammer…”
We’ve prioritized “intuitive” over “efficient.” There’s nothing wrong with learning to properly use a tool, and it’s sad that we don’t expect users to put in that modicum of effort. In the 80s and 90s, that’s just how things were, you either learn the tools (often with a handbook) or you don’t use them. The net result was a populace that didn’t need support as much, because they were used to reading the docs. If a component died, the docs would tell you how to diagnose and fix it. These days, those docs just don’t exist, so if the solution isn’t intuitive, you replace it (both hardware and software).
That’s where this frustration comes from. Making things intuitive also means reducing the average person’s understanding of their tools, and the mouse is a symptom of that shift.
I would argue, overall, it’s more efficient to aim for the former than the latter, especially if we are talking about the wide range of people who need to use a computer.
But I’m curious as to the “actions per minute” type of efficiency that people are talking about here. I’m an engineer, who has moved into computer programming. I would say the bottleneck for me is never that I have to move my hand to my mouse, but it’s always about thinking and planning. I feel like this “it’s so much more efficient” is viewing us as almost machines that are just trying to output actions, rather than think through and solve problems.
I think this is more of a problem that it went from an extremely niche thing, to something that almost everyone is required to use, rather than a move away from keyboard only. Or, maybe, the rise of the mouse opened the computer to everyone being able to use it, which is why it has become so ubiquitous.
To me it’s more about ergonomics. Most of my time is spent reading code and sending messages. I use ViM, or at least ViM bindings, for reading code because it’s so much nicer for navigating code than clicking and scrolling.
I’m not saying everyone should learn ViM, I’m just using it as an example. I’m much less concerned about maximizing my text entry speed and more interested in maximizing ergonomics of the tools I use the most every day. For me that’s my text editor and terminal, followed closely by my browser.
I have no problem with a good mouse UI (I love mouse mode in ViM), my problem is when there isn’t an alternative power user UX (shortcuts and whatnot).
This extends to a ton of things. Let’s say you want to search for a file, but the GUI indexed search isn’t working properly (maybe it didn’t index your file? Or maybe you need more than string contains?). If you’re comfortable on the CLI and understand regex, you’re set. Or maybe you need to do some bulk change across files, the CLI is going to be really efficient. It’s less about total productivity but not having to do stupid repetitive tasks because that’s my only option. I’d much rather write a script than do the repetitive thing even if the total time spent is equivalent.
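In that spirit, here’s a minimal sketch of the kind of throwaway script meant above: a regex search-and-replace across a directory tree, the sort of job a GUI indexed search handles poorly. The paths and pattern in the usage comment are only examples.

```python
# A throwaway bulk-change script: apply a regex replacement to every
# matching file under a directory tree and report what changed.
import re
from pathlib import Path

def bulk_replace(root, glob, pattern, replacement):
    """Apply a regex replacement to every matching file under root."""
    changed = []
    for path in Path(root).rglob(glob):
        text = path.read_text(encoding="utf-8")
        new_text, n = re.subn(pattern, replacement, text)
        if n:
            path.write_text(new_text, encoding="utf-8")
            changed.append((path, n))
    return changed  # list of (file, substitution count) for a quick report

# e.g. rename an identifier across a project (example paths/names):
# bulk_replace("src", "*.py", r"\bold_name\b", "new_name")
```

Five minutes to write, and it beats clicking through a rename dialog fifty times even if the total time comes out the same.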
People just aren’t learning the power user stuff these days and look at me like I’m a wizard because I can use tools written 40 years ago…
I feel that.
Sounds like I’m glad “home row” style typing fell out of favour. It may be the theoretically fastest way to type eventually, but it seems to lead to pretty rigid behaviour. Adapting to new things as they come along and changing your flow to move with them instead of against them is just a much more comfortable way to live. Even if I only type 80% as fast.
I have no idea what you mean by “fell out of favour”. Does your keyboard not have pips on F and J? People still touch type. Dunno what to tell you.
You’re getting hung up on “home row”. You still have to move your hand from the keyboard to the mouse and back. It’s the same problem, whether or not you know how to type well and stare at your hands, except now you have to add steps for “look at the screen” and “look back at your hands”.
Fell out of favour in that it isn’t taught as “the correct way to type” any more. Largely because most devices you type on now wouldn’t even have physical keys. So learning home row typing for the occasional time the thing you are typing on is a physical full sized keyboard just disrupts the flow of everything else.
Being perfectly optimal isn’t as productive as it feels, especially when it leads to resistance to change and adapt.
Home row is absolutely still taught as the “correct” way to type. Source: kids are in elementary school
Yup. My kids learned how to type properly, and they’re in elementary school. And no, their teachers aren’t boomers, they’re a mix of millennials and gen z.
Hmm, is that a states thing then? Typing courses around here have capitulated on it. You can choose to learn it if max typing speed is the most important factor, but alternate forms of touch typing and muscle memory are fully accepted now. Often times just due to the varying amount of personal practice, the fastest typer in class isn’t even a home row kid.
But way back when I was in school, they constantly tried to force me to switch to home row, despite already having years of practice typing outside of school. I was already a faster typer than the teacher, so they had a hard time convincing me that their way was better. I eventually saw enough data on it to believe it, but I’m still glad I was unconvinced at the time. I still type fast enough to get any typing job, but I’m not so rigid and can use various types of keyboard equally well. Home row is very good at one thing, but it makes you prioritise that one thing too much. If you really wanted to type fast, but be limited to only one set of hardware, stenography is one step more in that direction.
Yes, it is taught. If you take a typing course, you will be taught to use home row. What you mean is, you were never taught to type, because we don’t teach that in school anymore. If you do most of your typing on a touch screen, I have to imagine you are very young. In 20 years, when no one is using a touch screen to enter text anymore (but people likely still use physical keyboards), you will remember this conversation and have some greater insight.
Whether or not you know how to touch type, in any situation where there is a mouse, THERE IS A PHYSICAL KEYBOARD. Not knowing how to touch type just makes the task switching overhead greater.
My kids were taught in elementary school in like 1st grade, largely because using laptops (Chromebooks) is part of the curriculum.
So I see you clearly haven’t heard of i3, sway, or Hyprland…
I’ve used ion, ratpoison, i3, sawfish, and other tiling window managers for fifteen or more years, all totaled up. There is a great deal of pressure to use a modern desktop environment, and it’s a lot of work maintaining my janky bespoke desktop-environment functions necessary for a few critical applications. I use KDE’s tiling features and keyboard shortcuts, but it’s a double-edged sword because I have to disable all window-manager bindings in (for example) Blender and emacs to avoid shadowing important features. Actually, I have re-implemented a lot of my custom KDE shortcuts as emacs bindings as well, so they still work when emacs has the focus. Here’s one:

    (cl-flet ((switch-to (name)
                (lambda () (interactive)
                  (shell-command (concat "wmctrl -a " name)))))
      (global-set-key (kbd "s-1") (switch-to "librewolf"))
      (global-set-key (kbd "s-2") (switch-to "konsole"))
      (global-set-key (kbd "s-3") (switch-to "signal"))
      (global-set-key (kbd "s-4") (switch-to "darktable"))
      (global-set-key (kbd "s-5") (switch-to "emacs")))
For my wm+Emacs work, I unified the shortcuts by calling a separate Go binary that checks whether the active window is Emacs. If it is, it sends the command to the Emacs daemon; if not, it sends the command to i3. For directional commands like move-focus, it first checks whether there’s an Emacs window on that side, and if not, sends the command to i3.
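For the curious, that routing can be sketched in a few lines. This is Python rather than Go, and the window-class name, shell commands, and Emacs function are assumptions about a typical X11 setup, not a drop-in tool:

```python
# Sketch of the dispatch described above: route one hotkey either to the
# Emacs daemon or to i3, depending on which window has focus.
import subprocess

def route(window_class, direction="left"):
    """Pure routing decision: which command should handle this keypress."""
    if window_class == "emacs":
        # Let Emacs move between its own windows first.
        return ["emacsclient", "-e", f"(windmove-{direction})"]
    # Anything else: hand the keypress to the window manager.
    return ["i3-msg", "focus", direction]

def active_window_class():
    # xdotool prints the focused window's class name (assumed installed).
    out = subprocess.run(
        ["xdotool", "getactivewindow", "getwindowclassname"],
        capture_output=True, text=True,
    )
    return out.stdout.strip().lower()

def dispatch(direction="left"):
    subprocess.run(route(active_window_class(), direction))
```

Keeping `route` free of side effects makes the decision logic easy to test without a running window manager.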
The things we have to go through just to meet basic needs.
How are you redirecting all input through your custom exec? Is that an i3 feature?
why have I made that anonymous function interactive??
Edit: Oh, I think anything you bind to a key has to be interactive.
.To an extent. Early 90’s I could navigate WordPerfect in DOS faster than I’ve ever been able to work in MS Word, because it was all keyboard even before I learned proper home key 10 finger typing in high school. Technically my first word processor was Wordstar on one of those Osborne “portable” computers with the 5-inch screen when I was a young kid, but Wordperfect was what I did my first real ‘word processing’ on when I started using it for school projects. So I might just be older in that ‘how do you do fellow kids’ in this sort of discussion.
To this day, I still prefer mc (Midnight Commander, a Linux-flavored recreation of Norton Commander that also has a Windows port; YMMV on the Win port) to navigate filesystems for non-automated file management.
I’ve been thoroughly conditioned for mouse use since the mid-late 90s (I call it my Warcraft-Quake era, we still used keyboard only for Doom 1/2 back in the early days), and I feel like it’s a crutch when I’m trying to do productive work instead of gaming. When I spend a few days working using remote shells, I definitely notice a speed increase. Then a few days later I lose it all again when I’m back on that mouse cursor flow brain.
This is my main reason for not pining for the days before the mouse: it made gaming 100000x better. I remember when we first started playing quake, a lot of the guys swore by the keyboard only, until I regularly destroyed them with the mouse. . .and they all switched over.
I’ve also done a lot of graphic design, photo-editing, schematic design, etc. . . and can’t imagine having to do that solely with the keyboard (but again, I’m often like “why isn’t there a keyboard shortcut for this?”).
Also, when it comes to productivity, I guess it depends on what you are doing because usually my big hurdle is not how quickly I can do actions (that is usually more important in video games, tbh), the big hurdle is sitting down and thinking about how to do it correctly.
Has the keyboard and mouse versus controller argument finally died? I mean I use a controller for things like Elden Ring and keyboard and mouse for first person or tactics/strategy games.
We proved twenty years ago that keyboard and mouse was better for first person gaming and I was still hearing arguments that controllers were better five years ago.
I have a game controller and a mouse. I don’t use my game controller to code. I don’t use my keyboard to sculpt. The problem isn’t that mice exist at all, its that they are overwhelmingly dominant to the point where most applications do not cater to anything else.
Now I see your original point in a new light. I just viewed it as a natural progression that the mouse would take over as the primary input because of its usefulness and intuitiveness. So when you said you “hated” this, I interpreted it as a hatred for mice in general and a wish for pre-mice days, rather than just a move back towards the keyboard as the primary interface.
Early ’90s*
You got it right the second time though, champ!
You have passed the test. We can be friends.
Nah, USB-A was the best since it replaced serial ports (esp. PS/2, which was much harder to plug in) and outlived/outclassed FireWire. USB-C is the best thing since HDMI (screw you, VGA and DVI), which was the best since USB-A.
I agree, I would just like to have more of them.