Tuesday, June 25, 2024

Identity Field

The Identity Field has to be the most obvious pattern. As Martin Fowler says, it is "mind numbingly simple."

However, imagine a system that lacks it. It is a total nightmare to deal with, because you have to define complex identity rules (for example, are the name and the location the same, and how similar is similar enough to count as "the same"?).
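A minimal sketch of the idea (my own illustration, with a hypothetical Customer type, not Fowler's code): once each record carries its database key as an Identity Field, "is this the same entity?" collapses to an id comparison instead of a fuzzy match on attributes.

    # A toy Identity Field: the object mirrors its database primary key.
    from dataclasses import dataclass

    @dataclass
    class Customer:
        id: int        # the Identity Field, mirroring the database primary key
        name: str
        location: str

    a = Customer(id=42, name="ACME Corp", location="Springfield")
    b = Customer(id=42, name="Acme Corporation", location="Springfield, IL")

    # Same entity despite differing spellings: no similarity rules needed.
    print(a.id == b.id)  # True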

I submit that this is also a challenge from a UI perspective in web applications. If you don't have a list-detail UI pattern, it gets very hard to find things. This is frighteningly common in single-page applications, where to get to a piece of data you have to go to Dashboard three, left column, in mode orange, and scroll down three items. It requires the user to remember the association between the entity type and the dashboard it is on. The UI becomes incredibly brittle if you want to enable even the most basic hypermedia link: jumping from some reference to the entity to the dashboard in a particular mode.

The detail page for an entity is the page driven by its Identity Field. It is its homepage. It is easier to find things if they have their own home. The home for the entity type is the list page. The navigation for the application can reference these list pages, which let you navigate to a particular entity. Again, this seems "mind numbingly simple." However, when you have a system that lacks it, it can become mind numbingly complex to navigate.
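To make the UI point concrete, here is a sketch of the URL scheme (my own hypothetical names, not from any particular framework): the list page is the home for the entity type, the detail page is the home for the entity, and any reference to an entity anywhere in the application can link straight to it.

    # Each entity type gets a home (the list page), and each entity gets a
    # stable detail page addressed by its Identity Field.
    def list_url(entity_type: str) -> str:
        return f"/{entity_type}"                  # e.g. /customers

    def detail_url(entity_type: str, entity_id: int) -> str:
        return f"/{entity_type}/{entity_id}"      # e.g. /customers/42

    print(list_url("customers"))        # the entity type's home
    print(detail_url("customers", 42))  # the entity's homepage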

Monday, August 17, 2020

Accuracy vs. Explainability Tradeoff

This article attempting to explain machine learning to statisticians is fascinating. I don't know if I even properly understand it, and the tone is a bit negative. Here's the simplest way I can think of it.

If I have:
    (b1 * x1) + (b2 * x2) + (b3 * x3) + a = y
the statistician is trying to minimize error in the b values and the machine learning person is trying to minimize error in the y value?

If you get the stats "too correct", the ML guy will know you are overfitting, and the model will do worse on new data that was not in the sample/training set.

Is that it?
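If that reading is right, a quick toy check (my own sketch, not from the article) shows the tradeoff: a flexible model can look better on the training data while doing worse on fresh samples.

    # Fit noisy linear data with a simple and a very flexible polynomial model;
    # compare errors on the training set vs. held-out data.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_data(n):
        x = rng.uniform(-1, 1, n)
        y = 2.0 * x + 1.0 + rng.normal(0, 0.3, n)  # true b = 2, a = 1, plus noise
        return x, y

    x_train, y_train = make_data(20)
    x_test, y_test = make_data(20)

    for degree in (1, 15):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        # The degree-15 fit chases the noise: lower train error, higher test error.
        print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")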

Saturday, August 15, 2020

The Twitter for blogs

Thinking about whether there is a way to beat Twitter with open standards. The day Twitter shut off RSS (so that people couldn't build apps that avoided their ads), we lost a huge part of its function in an aggregation system. However, tools like NewsBlur show it can still be scraped.

Spotify is trying to create a walled garden around podcasts. Again, shutting off the aggregation function.

Google Reader was very popular. With some additional tools to create self-hosted content, likes, and some additional discovery tools, it could have been a distributed Twitter.

We may be able to do this.

Monday, July 06, 2020

Lowering the bar

I have another post sitting in the dock, but I can't really release it because I need to think about it a bit more. I've been trying to lower the bar yet again so that I can produce some shorter things, but I still don't want to put out things that are wrong!

Fred Wilson at AVC (https://avc.com/2020/07/short-and-sweet/) has posted about how his blog posts have been getting shorter.

If shorter is good enough for Fred, it's good enough for me.

Short content seems like Twitter. I'm just not finding Twitter rewarding lately. It seems like the Cultural Revolution on there, just people spitting at each other. Not pleasant in any way. My approach to consuming Twitter was to use lists, keeping one of just silly, funny stuff, and to only follow people I know personally and am thus less likely to be sickened by. I just can't stick to it properly; I keep following more people.

Anyway, things are so ugly on Twitter I don't even feel like linking up these posts there.

Sunday, June 28, 2020

Thoughts on David Patterson and Lex Fridman

I was watching the fascinating Lex Fridman discussion with David Patterson and found it very stimulating. It really made me think about the nature of computer science as a whole in a way that I haven't for a while. David Patterson is so clear when he speaks that he makes you consider the basic nature of the endeavor. It reminded me of so many things I used to think about more frequently.

https://www.youtube.com/watch?v=naed4C4hfAg

Some random notes or ideas that occurred or recurred to me. Not necessarily in the discussion, but where my mind went...

  • The concept of layers of abstraction in computer science is simply fascinating. It is really possible to keep diving down deeper. It reminded me of when I was in my first electronics class and I was asking how AND and OR logic gates worked. That wasn't really the point of the class; it was more about figuring out how to assemble those parts to make an adder or do something useful. The teacher didn't really know or understand my question (or why I was slowing down the class by asking it). Some other student volunteered that there must be some kind of thing inside the gate that would close or open based on the total voltage. It took me a while to realize this word I had heard so many times, transistor, was really how these were made. (See the half-adder sketch after this list.)
  • I was then drifting to thinking about the intersection between philosophy and computer science at that most basic level. Not only are the logic gates abstractions over the transistors used to create them, the gates themselves are assembled into things like adders, composing arithmetic operations from boolean logic operations. This ties to (analytic) philosophy a bit, when you think about how Frege was talking about Peano's axioms, and Russell and Whitehead trying to derive mathematics from logic. It took me back to a semester when I was studying Wittgenstein and Computer Architecture (two separate classes) and kept sliding into how Frege's types and the type systems in programming languages are similar. I wonder what kind of programming language Wittgenstein would have created; maybe it would have been a game...
  • Of course, compilers themselves are a form of translation from one language to another. That's always interesting to think about. I hadn't quite grasped how the Intel x86 architecture translates CISC instructions into RISC-like micro-operations in the chip, just in time.
  • The idea of compilers then got me thinking about interfaces as mediating between layers of abstraction. A layer of abstraction can be thought of as an interface. It defines how to interact with that layer, but provides the additional opaque-box capability of being able to swap out the implementation without changing the other layers. (See the interface sketch after this list.)
  • Some more prosaic thoughts on the idea of the best technology not always being the market winner. How much I hated Elasticsearch from a pure attitude and marketing perspective, and their deceptive ways. How I felt vindicated when they went away from open source. How much I hate their current market position and hope AWS crushes them.
  • The discussion of how effectively ARM has pursued RISC, and how effectively Apple is integrating the software stack with the hardware, made me wonder when there will be a true open-architecture phone with mass adoption that can compete. It is amazing that Apple is moving to this for their desktop operating system. It seems crazy, but the possibilities are huge.
  • Finally, the chat on MLPerf was so interesting. I didn't realize Nervana had refused to release their MLPerf scores prior to the acquisition by Intel, and that their tech has since been abandoned. Meanwhile, Intel acquired Habana Labs, partially because they did have good MLPerf scores. Benchmarking is so important. It made me want to benchmark things I am doing!
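Here is the half-adder sketch promised above (my own toy, not from the talk): gates as tiny functions, then arithmetic composed from them, one layer of abstraction stacked on another.

    # Boolean gates as functions (the transistor level is abstracted away).
    def AND(a, b): return a & b
    def XOR(a, b): return (a | b) & ~(a & b) & 1

    def half_adder(a, b):
        """Add two bits: the sum bit is XOR, the carry bit is AND."""
        return XOR(a, b), AND(a, b)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> carry {c}, sum {s}")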
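And the interface sketch (again my own, with hypothetical names): code above the layer depends only on the interface, so the implementation beneath it can be swapped without changing the other layers.

    # A layer of abstraction expressed as an interface (typing.Protocol).
    from typing import Protocol

    class Storage(Protocol):
        def save(self, key: str, value: str) -> None: ...
        def load(self, key: str) -> str: ...

    class InMemoryStorage:
        """One interchangeable implementation; a DB-backed one could replace it."""
        def __init__(self) -> None:
            self._data = {}

        def save(self, key: str, value: str) -> None:
            self._data[key] = value

        def load(self, key: str) -> str:
            return self._data[key]

    def remember_greeting(store: Storage) -> str:
        # This code only knows the interface, not the implementation.
        store.save("greeting", "hello")
        return store.load("greeting")

    print(remember_greeting(InMemoryStorage()))  # hello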
Okay, that's about it so far, but I am going to try to start recording my impressions like this more often. Probably won't be able to keep it up. But who knows?

Transitory technologies

Most of the JavaScript frameworks seem to be transitory technologies. They exist to fill some perceived gap in the web browsers' capabilities. Once those gaps are filled, their reason for existing fades.

Probably the two strongest pieces are WebAssembly and Web Components. We also have WebGL. It seems hard to imagine that a virtual DOM, and another event model on top of the event model that already exists, are the right way to deal with these technologies.

The only question is how long it takes.

Saturday, June 20, 2020

Text as Design

I am not opposed to Design. I do think there are a lot of advantages to text as opposed to image-heavy design, but I see both sides of both sides. There was a trend for a couple of years to have website home pages be huge, highly detailed, saturated images, or even video. The idea seemed to be to create a feeling of lushness or richness. This has obvious costs (bandwidth, latency, etc.), but there is also something decadent about it: we have all of this bandwidth, let's use it. A picture is worth a thousand words. However, these are fleeting feelings that leave me empty. I need the words to create a rational idea out of the picture.

I need the command line.

I love the episode of the Netflix design series "Abstract" that featured Paula Scher: https://www.netflix.com/watch/80093802?trackId=14277283. There is a lot you can do with typography. I do see evidence that people are coming back to text-favored design: https://cheapskatesguide.org/articles/beauty-of-text.html.

I am sick of fighting over which icon to use on a toolbar when a word would do-- although this does bring another battle. Which word? Which language? We live in a world where we fight over words. Do symbols let us agree to disagree? Do they let us put our own interpretations into those symbols, while disagreeing under the surface about what they mean?

I kinda want to switch from using Blogger to get more fonts.

Wednesday, June 17, 2020

Purist Programming

Programming is about making things. Purity is about adhering strictly to a set of rules. It seems that one should favor the programming over the purity. Abstractions should be minimal.

Think about this thing from John Cook on "Pretending OOP Never Happened".

That has been my experience. I hardly ever write classes anymore; I write functions. But I don’t write functions quite the way I did before I spent years writing classes.
And while I don’t often write classes, I do often use classes that come from libraries. Sometimes these objects seem like they’d be better off as bare functions, but I imagine the same libraries would be harder to use if no functions were wrapped in objects.
Cook then quotes James Hague's "Follow-up to 'Functional Programming Doesn't Work'":
100% pure functional programing doesn’t work. Even 98% pure functional programming doesn’t work. But if the slider between functional purity and 1980s BASIC-style imperative messiness is kicked down a few notches — say to 85% — then it really does work. You get all the advantages of functional programming, but without the extreme mental effort and unmaintainability that increases as you get closer and closer to perfectly pure.
Getting to the point of it, the pure functional people are the most annoying people in the world. The world has state. You need state.

I really get annoyed by the programming paradigms in place now on the web, particularly the front-end paradigms. There is a very simple event-driven model that works surprisingly well: the component is an object, with state, that responds to events... anyway, probably worth exploring the world of JavaScript separately.
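A minimal sketch of what I mean (my own toy, hypothetical names, deliberately framework-free): the component is just an object holding state and dispatching events to handlers.

    # A component as a plain stateful object that responds to events.
    class Counter:
        def __init__(self):
            self.count = 0  # the state the purists would banish
            self.handlers = {"click": self.on_click, "reset": self.on_reset}

        def on_click(self):
            self.count += 1

        def on_reset(self):
            self.count = 0

        def dispatch(self, event):
            self.handlers[event]()  # route the event to the right handler

    c = Counter()
    for event in ("click", "click", "reset", "click"):
        c.dispatch(event)
    print(c.count)  # 1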

Sunday, June 14, 2020

GPT-3

I am not sure if people who haven't been doing machine learning can appreciate how weird GPT-3 is or what "few-shot learner" means.

In normal NLP machine learning, you might start with a pre-trained language model that encodes the relationships between the words of a language. You then build a training set, probably thousands of items, where you have labels applied to text, correct translations, answers to questions, and things like that. You then train the model to minimize error on that training set. Then, with the trained model, you send it new samples of text and it spits out a label, translation, or answer as appropriate.
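As a stand-in for that recipe, here is a deliberately tiny sketch (my own, nothing like a real pretrained model): labeled training data, a fitting step that minimizes error, then predictions on new text.

    # Toy supervised text classification: labels in, "training", then inference.
    from collections import Counter

    train_data = [
        ("what a great movie", "positive"),
        ("loved every minute", "positive"),
        ("utterly boring mess", "negative"),
        ("waste of time", "negative"),
        # ...a real training set would have thousands of labeled items
    ]

    # "Training": score each word by the labels it co-occurs with.
    word_scores = Counter()
    for text, label in train_data:
        for word in text.split():
            word_scores[word] += 1 if label == "positive" else -1

    def predict(text):
        score = sum(word_scores.get(w, 0) for w in text.split())
        return "positive" if score >= 0 else "negative"

    print(predict("boring waste"))  # negative: a new sample in, a label out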

With GPT-3, a much bigger language model trained just to predict the next word, you don't have to do any of that. You construct the whole task in the last step, where you would normally be sending a trained model new samples of text. The trick is, you send a description of the task in with the text. So you could send in:

 translate from English to French: hat => chapeau, cat => chat, hello => 
and it would send back "bonjour".

It learned enough about language, from seeing examples of what typically follows "translate from English to French", to get good performance on that task. This wouldn't be surprising if it had been trained on that task, but there is no task-specific training (aka fine-tuning). It was just trained to predict the next word. With a big enough model (a mind-boggling 175 billion parameters), it just picks up the whole task as a pattern.
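The "few-shot" part is just string construction. A sketch (my own, with no API call shown): the examples live in the prompt, and the model's weights never change.

    # Build the few-shot prompt: the task description and examples are the input.
    examples = [("hat", "chapeau"), ("cat", "chat")]

    def few_shot_prompt(word):
        shots = ", ".join(f"{en} => {fr}" for en, fr in examples)
        return f"translate from English to French: {shots}, {word} => "

    print(few_shot_prompt("hello"))
    # translate from English to French: hat => chapeau, cat => chat, hello =>
    # Sent to the model, the most likely continuation should be "bonjour".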

Read the paper.

Wednesday, December 04, 2019

Decisions

Thinking about decisions

Check out this product from Osparna that supports investment diligence and decision-making. The idea is that we should take a systematic approach to decision-making around investments. Rather than basing decisions only on our intuition, we decide what the basis for a good decision is, and then gather the information. There's a lot more to it...