They couldn't compete on high-end performance for a long time. But nowadays they even lose on price-performance for entry-to-mid-level GPUs.
Not really. Entry-to-mid level means the RX 560, RX 570, and RX 580. The RX 560 is a mixed bag, but both the RX 570 and RX 580 are competitive with, and a bit superior to, the Nvidia alternatives - as long as they are not overpriced, which they were until early this month. So the market just has to return to the regular prices AMD initially aimed for, and those series will work fine.
But one can be concerned about the utter failure of high end Vega.
I personally have been running an RX 480 8GB since the day they were released, and I haven't hit a game that can't run at an excellent framerate even on my 144 Hz monitor, so I don't have a concern yet :) It was an amazing deal at just $263.98 at release. From the looks of the benchmarks, the Vega 64 is leaps and bounds faster as well.
As far as VR goes, I think it's still in the early stages -- they need to improve the lenses, resolution, and refresh rate before I'll buy in, so I don't yet need VR frame rates. At the current level I'm sure the Vegas would handle VR fine, though, since they seem to slightly outperform Nvidia.
I've got an RX 580 in my gaming desktop. Couldn't be happier.
And the integrated Vega graphics in their Ryzen mobile processors are nothing to sneeze at; they get performance comparable to some of the low-end Nvidia cards.
I can only speak for myself, but in now 7 years I have never connected a keyboard or mouse to my 2011 MBA. Its keyboard and touchpad are just so good that I don't need to, even when I connect to an external display.
I'd say no. In my experience other companies aren't even able to recreate the smooth surface and 'clicking feel' of the Apple touchpad, which is what I value the most. I suspect that they could, but aren't legally allowed to due to patents.
I wouldn't care about the Touch Bar if Esc and the sleep key were still regular hardware keys. Those are the only ones that I actually use.
But an even bigger problem for me is the price increase. The base TB version costs 200€ more (or 400€ if you'd otherwise buy the nTB 128GB version). With the nTB version not getting the update, the updated MBP is too expensive compared to non-Apple machines.
As an aside, I switched Caps Lock to ESC a few years ago and after the 6 weeks(?) or however long it took to fully adjust, it's fantastic. I wish I could bring Caps Lock -> ESC joy to everyone.
I've done the same, but mapped Caps Lock to Hyper (Cmd+Ctrl+Option+Shift), just to pair it with custom shortcuts.
But I really love using it for escape. I'm on a 2014 model but not going to the corner for escape is great. Plus I'm about to get one of these new ones, and I feel prepared since I have zero usage of the top row (other than the special functions).
I believe the price increase is purely a profit-maximizing strategy. If people will pay that much, then Apple has no reason to ask for less. Unfortunately, I cannot imagine myself switching to a Windows-based laptop. This is just another example of why a monopoly is bad.
I agree that this is profit maximization, but the only monopoly Apple has is on sucking less than others at particular things that particular niches care about (privacy, UX, ecosystem integration come to mind).
A non-Apple device will do all the same stuff. It just might not do it the exact way you like out of the box.
I pay the Apple tax (albeit exceedingly infrequently) because I can afford it & it reduces friction in my life, not because I have no other options.
If it's any help, I found this tool useful: https://www.haptictouchbar.com/ (I'd be surprised if Apple doesn't build this into the Touch Bar at some point.)
For those who are wondering, it creates a haptic bump via the trackpad. You'll feel it right there if you put one finger on the trackpad and then touch the Touch Bar.
It's a good enough effect, though; I will probably keep it. Thanks for the suggestion.
How advanced are those students in their studies when they take this course?
Their work is probably good but I can't help but think many of the reports/posters seem underwhelming. I doubt they would be accepted at the universities I know.
I like both Swift and Tensorflow. But how is that going to work?
As far as I understand, macOS has no official Nvidia support (hence no CUDA), which is (at least) advised if you want to use a GPU for computing. Using OpenCL instead of CUDA would require building TensorFlow from source. OpenCL support is not as mature as CUDA's, so I imagine you could run into unexpected (performance) problems.
On Windows, you have excellent CUDA support but lackluster Swift support.
Will they add OpenCL as a "first class backend" for TensorFlow, or rather expect "first class" support of Swift on Linux and Windows? Otherwise, who is going to use it?
People rarely train ML models on macOS for the reason you mentioned. Most machine learning work happens on Linux, so this should work well there.
TensorFlow supports a standalone server mode where it receives computation graphs and executes them. This is nice because then you can remotely execute on any accelerator (Cloud TPU, multi-worker multi-GPU) from your laptop.
In their demo, they did exactly that with a Cloud TPU: it connected to a TensorFlow server that executed the machine learning training part of the program.
I agree; I just had in mind that Apple only now added/announced support for external GPUs. Besides image and video editing, I thought general computing tasks were a use case they had in mind. It's not like gaming is big on macOS.
>TensorFlow supports a standalone server mode where it receives computation graphs and executes them. This is nice because then you can remotely execute on any accelerator (Cloud TPU, multi-worker multi-GPU) from your laptop.
Where can I find more documentation on this? I’ve been looking for something exactly like this.
Another format is ONNX [0], in which Apple and Google don't seem to participate. I don't know the politics behind it, but there should be a common format for all libraries/platforms.
ONNX seems to be an initiative to allow Microsoft's and Facebook's AI platforms to compete with TensorFlow.
Considering Tensorflow is more a grab at developer mindshare than an ideal platform [for example, its performance lags by a factor of two behind MXNet and Torch], I think it's a smart plan.
I don't think the last sentence is fair to TensorFlow. Torch has been around for ~15 years, compared to TF's 3. You'd expect TF to catch up in terms of performance in the future.
> A forward() function gets called when the Graph is run.
Isn't that almost exactly the same in TensorFlow? You'd run your model to generate an output, and/or run your optimization operation to optimize the model.
> Based on some reviews, PyTorch also shows a better performance on a lot of models compared to TensorFlow.
Citation needed. How well optimized are the examples? What does "performance" mean? Precision, or learning iterations per second?
If it's the latter, in which environment? CPU/GPU/distributed computing?
> A forward() function gets called when the Graph is run.
Yes, the idea behind it is the same. The difference: PyTorch has a forward() function in its module class which you have to override, while in TensorFlow you specify that yourself.
Totally off topic, but could someone explain Elixir's syntax to me?
In the following code (from the linked site):
What is embeds_many?
Are :string and :map type information or Atoms?
What happens to :changes, Change, primary_key?
Does the code between do and end just call the field function twice?
embeds_many :changes, Change, primary_key: false do
1. `embeds_many` is a macro from Ecto's schema DSL (not core Elixir); it embeds a list of another schema in-line in the current one.
2. Anything that begins with a colon is an Atom in Elixir, including `:string`, `:map`, `:changes`, `:field`, and `:value`. These atoms are used by Ecto at runtime to identify certain things. `:changes` is a unique name given to the embedded schema, `:field` and `:value` are field names on that schema, and `:string` and `:map` inform Ecto of how to treat certain values.
3. `Change` is the name of a Module which describes the Schema we are embedding
4. `primary_key: false` is a configuration option
5. Everything after `do...` is describing what the embedded schema would look like. So it has a `:field` field that is of type string, and a `:value` field that is of type map.
Hopefully that makes some sense... I'm not totally clear on it either (having not used Ecto before) but that's what I'm grokking.
This code snippet is a little bit confusing if you aren't already familiar with Elixir, because most of the syntax you're seeing is actually from Ecto's [0] DSL. Calling "use Ecto.Schema" at the top of the module brings some extra functionality into the current scope. For example, embedded_schema is a macro exported by Ecto.Schema [1].
embeds_many is another such macro [2], which allows you to embed another schema, in-line, into the current schema. This is in contrast to the more common has_many relationship, which references another table entirely.
Here, :string, :map, etc are atoms that are being passed to the field macro [3], to define the schema for the table.
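Putting those pieces together, here's a hedged sketch of what the full surrounding module might look like (the outer module name `AuditLog` is my invention, and this only compiles with the Ecto library installed):

```elixir
# Hypothetical wrapper module; requires the Ecto library as a dependency.
defmodule AuditLog do
  use Ecto.Schema

  embedded_schema do
    # :changes names the embedded list, Change is the module Ecto defines
    # from the do-block, and primary_key: false is a keyword-list option.
    embeds_many :changes, Change, primary_key: false do
      field :field, :string  # which field changed
      field :value, :map     # its new value
    end
  end
end
```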
Others provided an explanation of the Ecto DSL, I just wanted to add the explanation of basic syntax. It goes like this.
In Elixir, the most basic function application looks fairly standard:
func_name(arg1, arg2, ...)
But the parens are optional, so you can write:
func_name arg1, arg2, ...
as well.
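For instance, these two lines are the exact same call, with and without parens (using IO.puts as a stand-in):

```elixir
# Both lines call IO.puts/1 and print the same thing.
IO.puts("hello")
IO.puts "hello"
```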
Next, in Erlang and Elixir function arity (number of arguments it takes) is always fixed (i.e. no varargs or Python's *args). Because of this you generally have to pack your arguments in a list; functions like `printf` work that way:
iex(2)> :io.format("fmt ~p foo: ~p bar: ~p", [1, 3, 4])
fmt 1 foo: 3 bar: 4
That takes care of (positional) varargs. There are some other functions, however, which would benefit from "keyword arguments", which also aren't directly supported. To get around this, you can take a list of pairs (keyword, value) as one of your arguments (usually last):
func_name arg1, [{:key1, val1}, {:key2, val2}]
Where `{}` creates a tuple and `:` creates an atom (called symbols in Lisp), which you can think of as a special[1] string value. This pattern is so common that Elixir provides syntactic sugar for it. The above is equivalent to:
func_name arg1, key1: val1, key2: val2
Edited to add: Capitalized names in Elixir are actually special atoms, mostly used for naming modules. It works like this:
iex(5)> :'Elixir.Change' == Change
true
There is more going on because of the fact you can alias module names, but mostly you can think about this as (another) special syntax for atoms.
[1] Allocated once and cached. When working with normal strings:
a = "Asd"
b = "Asd"
you have no guarantee that `a` and `b` point to the same thing in memory, while with atoms you do have that guarantee. This makes comparing them efficient.
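A small runnable sketch of that guarantee, comparing strings with atoms:

```elixir
a = "Asd"
b = "Asd"
# Strings compare by value; the runtime gives no guarantee they share memory.
IO.puts(a == b)                     # true
# Atoms with the same name are always the same interned value.
IO.puts(String.to_atom(a) == :Asd)  # true
```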