tracyhenry's comments

Another one I really liked is Berkeley CS182: https://cs182sp21.github.io/

The youtube playlist is here: https://www.youtube.com/playlist?list=PL_iWQOsE6TfVmKkQHucjP...

Prof. Sergey Levine is REALLY good at explaining the intuitions of DL algorithms. This class also includes lectures on ML basics and very approachable assignments.

Many classes/blog posts start with describing what a neuron is - that IMHO is a super terrible way to teach a beginner.

To understand DL, one should know why we need activations (because stacked linear models are still just linear) and why we need back-propagation (because we optimize a loss with SGD, which needs gradients of that loss). This class is great at explaining those things in an intuitive way. Following it through, I feel I built a pretty solid ML/DL foundation for myself.
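On the activation point, a tiny numpy sketch (random weights, purely for illustration) shows why stacking linear layers without activations buys you nothing:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))          # a batch of 4 inputs

# Two linear layers with no activation in between...
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 3))
deep_linear = x @ W1 @ W2

# ...are exactly equivalent to one linear layer:
W_combined = W1 @ W2
assert np.allclose(deep_linear, x @ W_combined)

# With a nonlinearity (ReLU) in between, the composition is no
# longer a single linear map, so depth actually adds capacity.
relu = lambda z: np.maximum(z, 0.0)
deep_nonlinear = relu(x @ W1) @ W2
assert not np.allclose(deep_nonlinear, x @ W_combined)
```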



The title missed one important piece: employees who don't choose remote work are required to come to the office at least half of the week (i.e., three days). This is less good news for most people.

Surprised this isn't higher, that's basically "back full time" to me but with a shorter weekly period. Very different than what I understood reading the headline.

Note that that's starting in October at the earliest, depending on how/when your local office reopens. The Menlo Park campus is only "10% open" right now, so the only people in the office are the ones desperate to get out of the house, and we're not expecting the office to be 50% open until September. There is plenty of notice and time for people who really care about working from home to either apply for remote work or slowly adjust back to office work.

This is great news for some, but not for most. What will happen is that most employees will choose to return to onsite work because of the perks and deeper connections with colleagues.

If most of your team is not working remotely - let's be honest - it'll be hard for a remote worker not to feel disconnected from the rest of the team.


This is a novel idea. Congrats on the launch!

When I hear collaborative writing, however, I think of Overleaf, which tons of researchers I've met use for writing LaTeX collaboratively. Does Curvenote support LaTeX editing out of the box? How can you get them to transition to your platform if their workflow isn't data heavy?

Btw - I personally am not very happy with Overleaf. Its UX could be improved in various ways, but it seems to lack development support.


(Overleaf co-founder here.) Thanks for the feedback --- I'd be happy to hear more about what you would like to see improved, either here or via email (in my profile).

And to the Curvenote team: Congrats on launching here! Happy to chat about collaborative scientific writing any time :)


Thanks John - I did not expect this comment :)

The biggest complaint I have: I couldn't navigate between different files using keyboard shortcuts. The only way is mouse clicks, which can be quite slow and error-prone. Modern IDEs usually let you bring up a file search box with a hotkey. The same goes for switching between the PDF and the TeX source.

Another thing that bothers me is that the CMD+Enter hotkey for compiling doesn't always work, and I couldn't figure out when it works and when it doesn't. When it doesn't, I again need to click the compile button, which is inefficient. I also sometimes hit CMD+S, but that saves the entire webpage when the editor isn't in focus.

One other classic UI issue: the only way to expand a folder in the left sidebar is to click the tiny little arrow, which is too inefficient. A much better way would be to expand the folder when I click anywhere in that folder's row.

Despite these issues, I want to say a huge thanks for creating Overleaf - without it, it would have been much harder for me to get my degree. :)


Thanks for taking the time to write this up! Strong agree on better file switching --- also high on my list. I've passed this feedback on to the team.


Thanks! I would love to chat about collaborative scientific writing! I will send you an email following up. :)


In Curvenote's editor you collaborate on the content as you would in something like Google Docs, without needing to write LaTeX -- but with the features you'd reach for LaTeX for: equations, figures, citations, cross-referencing, etc.

Documents can then be exported as a PDF, which uses LaTeX for typesetting. Currently that's with a default template, but we're working on user-defined templates right now.

When people's workflow is not data heavy, we think there are other features that make Curvenote an attractive place to work: the WYSIWYG style of writing, real-time comments, and easy sharing on one hand, but also how Curvenote helps you easily reuse, update, and build on your existing content.


Google's FNet seems to apply Fourier transforms to NNs? https://arxiv.org/abs/2105.03824


That paper was one of the most misleading papers I've ever seen in NNs. https://twitter.com/theshawwn/status/1393315603973386240

Like many others, I was hyped when it came out. Then it turned out that BERT with half the parameter count still kicks its butt in accuracy.

They also only model the .real component and ignore the .imaginary component entirely, which you can't do and expect good results.
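To make that concrete, here's a minimal numpy sketch of roughly FNet-style mixing as I understand it (a 2D FFT where only the real part is kept). For a real input, the real part of the DFT encodes only the even-symmetric component, so the odd-symmetric half of the signal is simply thrown away:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 64))  # (sequence length, hidden dim)

# Roughly FNet-style mixing: 2D FFT, keep only the real part.
mixed = np.fft.fft2(x).real

# Inverting just the real part recovers only the even-symmetric
# component of x -- the odd-symmetric component is lost for good.
recovered = np.fft.ifft2(mixed).real
rev = np.roll(np.roll(x[::-1, ::-1], 1, axis=0), 1, axis=1)  # x[(-n) mod N]
even_part = (x + rev) / 2
assert np.allclose(recovered, even_part)
assert not np.allclose(recovered, x)  # information was discarded
```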

But, FFTs are so cool and under-explored that I'm sure they'll be making the rounds in NNs soon. There are lots of advantages to frequency space representations.


I don't understand why the first operations of a CNN aren't an FFT-based decomposition into spatial wavelets. This is basically all the first layers are doing anyway. In fact, the filters usually learn both an edge and a centroid at a given orientation; you can get both at the same time with complex wavelets.
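For instance, a 1D complex Gabor wavelet (hand-picked illustrative parameters, nothing tuned) packs both filters into one object: its real part is even-symmetric (blob/centroid-like) and its imaginary part is odd-symmetric (edge-like):

```python
import numpy as np

# Complex Gabor wavelet: Gaussian envelope times a complex sinusoid.
# sigma and freq are arbitrary illustrative values.
t = np.linspace(-3, 3, 301)  # symmetric grid around 0
sigma, freq = 1.0, 1.5
gabor = np.exp(-t**2 / (2 * sigma**2)) * np.exp(1j * freq * t)

# Real part is even (the "centroid" filter), imaginary part is odd
# (the "edge" filter) -- both orientations in a single complex filter.
assert np.allclose(gabor.real, gabor.real[::-1])
assert np.allclose(gabor.imag, -gabor.imag[::-1])
```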


The statement in the tweet said that they won't sell.


That is as amendable as the policy of accepting payment in bitcoin was. Right now, with Tesla holding a nontrivial bitcoin position and Musk influencing the bitcoin price, it's in the company's interest for him to say it won't sell, even if it reverses that policy tomorrow.


Can you elaborate on the kind of data and queries that SQL is slow for?


Likely the sensor data is being stuffed into a "standard" SQL schema -- loads of these bespoke solutions aren't fully dialed in with tools like TimescaleDB, or Prometheus for these metrics. Even with slower sensors (e.g., a 240 s interval), the data builds up and slows the system (without indexes).


The problem that arises with a lot of these "pull data from sensors, pump it into a database" systems is that schemas and data integrity end up a second-class problem behind storage. When you can't push an update to whatever is ingesting data, and that ingestion tool is ingesting data in an invalid format, you can't just ignore the data (or fix the problem). So your store has to accommodate semi-structured and unstructured data gracefully.

I do not agree that SQL is "slow" for these types of problems; I've built a number of systems that handle this effectively. You _could_ use a tool that has schemaless/unstructured data as a first-class feature, but if your goal is to reduce complexity, a Postgres instance is just fine. As with all data projects, indexing is important and needs to be thought out from the beginning. For sensor data, it's also a good idea to think about data retention and removal policies immediately (keep your metrics/aggregates, and move raw data to cold storage after a while).
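A rough sketch of those last two points, using SQLite's stdlib bindings for brevity (the same idea applies to Postgres; the table and column names are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE readings (
        sensor_id INTEGER NOT NULL,
        ts        INTEGER NOT NULL,   -- unix seconds
        value     REAL
    )""")
# The index that makes per-sensor time-range queries fast:
con.execute("CREATE INDEX idx_readings ON readings (sensor_id, ts)")

# Fake readings: two sensors, one sample every 240 s.
con.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [(s, t * 240, float(s + t)) for s in (1, 2) for t in range(10)],
)

# Keep aggregates, then prune raw rows past a retention cutoff.
hourly = con.execute(
    "SELECT sensor_id, ts / 3600 AS hour, AVG(value) "
    "FROM readings GROUP BY sensor_id, hour"
).fetchall()
con.execute("DELETE FROM readings WHERE ts < ?", (240 * 5,))
```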


I feel helpless after clicking around trying to find a live demo that I can play with. I have no idea how this tool works just from reading the documentation.

Having an example with code on the side would make it much clearer what your tool does.


Did you see the demo on their home page? https://sli.dev/


OK, this is definitely helpful. I wonder why this website and the demo aren't linked in a noticeable place in the GitHub README.

The first thing I clicked in the README is "Why Slidev?", which brings me to a documentation page with almost nothing graphical.


Good points, will update the docs. Thanks!


Here's a talk by Arrow's lead maintainer Wes McKinney: https://youtu.be/OtIU7HsHCE8?t=2731 which I think gives a good overview of the high-level motivations behind Apache Arrow.

