You have to pick your poison: build up subscription costs, or build up support costs to maintain your own deployments. Depending on your budget, the number of users, your needs, the vendor support contract, and your priorities, the costs can go either way. That's the premise that both software-as-a-service and proprietary vendors selling support contracts are built upon.
How many users do you have on that JIRA instance? Guessing it's ~500, which works out to around $32 per user per year, which is actually fairly cheap considering the value add the developers would get out of it.
Fuck all extra value. 395 seats for nothing. Not only that, it's a pile of crap. It's slow, unreliable, and requires so much configuration to bend it into an acceptable shape that it has cost probably 20 days' work on top of that, and it needs someone to spend 2-3 days a month on it. It has also gone down twice due to workflow failures and indexing bugs, which have taken a day to resolve while bouncing back and forth with support (who are actually quite good). We also integrate with Crucible, which barely works on a good day; we've resorted to restarting it twice a day to stop it leaking memory and hanging. Plus it takes 9 whole days to index our repository after an upgrade!!!
And we can't go hosted because X isn't available and Y isn't available. Also hosted LOST a load of people's data a year or so ago.
Believe it or not, including the pricing strategy and administrative overhead, we could have written our own tailored software for less after 3 years.
Bad product selection on our part, but despite the marketing and general buzz around Atlassian being the opposite, it stinks as a platform. Nothing but pain.
Yet enough people think it's a C compiler that if you try to use avant-garde C99 features in your C project, someone will invariably complain that MSVC doesn't build it.
Current best practice is actually to draw the black boxes, print it, then scan it back in. The result is crummy quality pdfs with no metadata or hidden text.
I think that was still a year or so before news stories started coming out about government "redacted" documents that were not actually redacted.
A lot of the NSA-related documents released after redaction were certainly run through the "redact, print, scan" routine, and I know the Navy does the same as a best practice now.
Maybe that hasn't made it out to the defense industrial base by now, but I'd be surprised if 9 years' worth of being beaten about the head over redaction mistakes hadn't fixed things even there.
Oh, there are tons of much better methods that would be more than good enough. Adobe even provides specific redaction features that drill down into the PDF data itself to make sure that nothing gets drawn under a redaction block; it works quite well.
But the government is about making procedures that are both easily understood by low-paid civil servants and unlikely to be screwed up in major fashion (e.g. our GS-7 scanning documents back in might use the wrong file name or reverse the pages or something, but they probably won't leak classified data if they do that).
As someone in the UK, I'm quite happy for Amazon to provide me with low-cost items on this basis. The tax here is fixed-rate VAT, so everything I buy from a tax dodge effectively gets me a small rebate against my nearly £3,375 monthly contribution to the state (including income tax, NI and VAT), versus spinning another 20% into VAT.
If you want quality, try Bytemark/BigV or colo with Exonetric.
Edit: I tried Linode and was pretty happy, but I don't trust something so far from home. Also tried DigitalOcean, but the lagging kernels shot it down for me.
What are your OVH troubles? I was thinking of getting a couple of their dedicated Infrastructure line of servers, so if you have relevant information I'd very much like to know.
a) Markdown isn't expressive enough. In fact it's pretty horrendous for editing and for separating content and layout, IMHO. I'd go as far as to say I prefer DocBook over Markdown. It's fine for GitHub READMEs, but not typesetting.
b) The problem is complicated. Complicated problems need lots of lines of code. 100 kloc isn't much. The thing I'm working on has 5.2 Mloc and took 25 people 20 years to write. To the layman it probably looks like it does less than this typesetter, which makes line count a bad metric to use.
My MacTeX installation is about 1300 MB, if that's any gauge.
Or, it could be that TeX has been cobbled together over decades with little design guidance or vision, resulting in a mishmash of approaches and inefficient code.
To me almost everything in TeX feels like a hack, even if the fundamental idea is good and the sum of all hacks ends up looking pretty good. The fact that it's a huge hack that takes 1.3 GB to install is not a gauge of quality.
It only takes 1.3 GB if you install every package ever invented. A normal installation of TeX Live takes about 200 MB and includes all the binaries (including tex, pdftex, xetex and luatex), a full LaTeX and ConTeXt and a whole bunch of other libraries.
I usually just install that, and then tlmgr any missing packages instead of downloading a gigabyte of TeX code I'll never need.
The full TeX Live is 1.3 GB because it ensures that this one installation is the only thing you need to build any semi-sane TeX document.
Describing TeX as code misses the point by a wide margin; a lot of what's in the distribution is fonts and compiled versions of the documentation.
Bash the idea of treating TeX as code out of your head and you will find it quite a bit saner.
> a) Markdown isn't expressive enough. In fact it's pretty horrendous for editing and for separating content and layout, IMHO. I'd go as far as to say I prefer DocBook over Markdown. It's fine for GitHub READMEs, but not typesetting.
I think they're all pretty poor. Sure, they're a reasonable choice for conversion to HTML, but they're only a subset of it, and HTML is pretty easy to write with style and content separated these days.
The point of these compile-to-HTML languages (e.g. Markdown) is that you're abstracting away from the display formatting as much as possible.
It's easy to say that you can just use HTML as content-only and style it using CSS, but the reality is that HTML is primarily a display format; it's pretty much unavoidable having classes, ids & DOM node structure in HTML that govern display behavior.
By abstracting concepts (paragraph, heading, list, etc) out, you can render them as components however you like into the HTML. Which means if you decide to change how they're rendered, you can do it all at once by editing the template.
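As a toy illustration of that point (this is a hypothetical sketch, not any real Markdown implementation), abstract elements like "heading" and "paragraph" can be rendered through swappable templates, so restyling means editing one template dict rather than every document:

```python
# Toy sketch: abstract document elements rendered through swappable
# templates. Illustrative only -- not a real Markdown parser or renderer.

def render(blocks, templates):
    """Render a list of (kind, text) blocks using the given templates."""
    return "\n".join(templates[kind].format(text=text) for kind, text in blocks)

# The "content" knows nothing about display.
doc = [("heading", "Results"), ("paragraph", "All tests passed.")]

# One rendering...
plain = {"heading": "<h1>{text}</h1>", "paragraph": "<p>{text}</p>"}
# ...and a restyled one: changing the output everywhere means editing
# only this dict, not the documents themselves.
fancy = {"heading": '<h1 class="title">{text}</h1>',
         "paragraph": '<p class="body">{text}</p>'}

print(render(doc, plain))
print(render(doc, fancy))
```

The same `doc` produces different HTML depending on which template set is passed in, which is the whole appeal of keeping content abstract.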
Markdown and its kin are certainly not perfect, but I'm pretty sure writing your content directly in HTML is generally a really terrible idea.
(that said, with web components we will be able to do this directly in HTML by creating custom data-driven tags, so maybe that's the future. ...but it's not quite here yet)
Show me the part of Markdown (and its kin) that allows me to distinguish why text appears in italics or bold (or, rather, is marked up in a manner that normally translates to them); hint: you can't assume stress, and simply holding to current conventions is only good for ephemera. Where do the language tags for foreign words and phrases go? Are asides, callouts and infoboxes really the same thing as blockquotes? HTML may not quite be SGML, but it is an awful lot richer than Markdown (and its kin) even if no attention is paid to styling. (I haven't used a class or id whose only purpose was styling in a decade. They either have a relationship to the document structure or they don't exist.)
True but I listed two options available which aren't as bad as Markdown. There isn't even one Markdown but several extensions as far as I can see and Markdown looks like a proof of concept sent to production and later extended at various customer sites.
I have been studying AsciiDoc lately. It turns out to be a really hacked-together macro language for creating "ad-hoc lite markup" to "SGML/XML-based semantic markup" conversions. The core idea and the default ad-hoc lite markup are both pretty nice, but the implementation leaves a lot to be desired. AsciiDoctor is an alternative implementation which is used on GitHub and in other places, but I believe it interprets AsciiDoc as more of a "fixed format" like Markdown than the original reconfigurable macro-based approach.
Anecdotally lots of people are using Markdown and Pandoc to get from lightweight files to decent PDF or HTML.
Ok, you're not going to use it if you need actual typesetting, for that just use TeX and be done with it. What we're seeing I think is the same type of backlash that spawned YAML from XML.
Funny thing is, the actual space for .docx or .odt could be shrinking, couldn't it?
Docx and Odt will live forever; 99% of the population will continue to use them. Hell, I write hundreds of pages of documentation every year in Word 2010 and use LibreOffice at home because it's easy. I use LaTeX for anything that is 'published'.
I hope we don't get another YAML: an inferior, minute subset of XML's capabilities, with all of its problems and none of its advantages or tooling. I've used XML for years, and once you understand it properly it's fine.
Call me old school but I find lightweight markup to be a hack job.
> Call me old school but I find lightweight markup to be a hack job.
The best way to think of it is that Markdown is a replacement for .txt. It's not a typesetting language; it's just a conventional way to structure text documents.
Most of the elements of Markdown were already there in .txt files from the 80s and 90s. All Markdown does is standardize how to do headers, lists, code blocks and bold/italic, so that you can generate documents that actually contain those elements, but could also just treat a Markdown file as a normal .txt.
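A minimal sketch of that dual nature (the helper below is hypothetical, not from any real tool): the same bytes read perfectly well as plain text, yet the conventional `# ` header prefix is enough to recover structure from them:

```python
# Sketch: a document that is readable as plain .txt, but whose
# conventional prefixes ("# " headers, "- " list items) let a tool
# recover structure. Hypothetical helper for illustration only.

NOTE = """# Release notes

- fixed the build
- updated docs
"""

def headers(text):
    """Collect lines that use the conventional '# ' header prefix."""
    return [line[2:] for line in text.splitlines() if line.startswith("# ")]

print(NOTE)            # perfectly readable as an ordinary text file
print(headers(NOTE))   # structure recovered from the same bytes
```

Nothing about `NOTE` stops you from treating it as a normal .txt file; the markup only matters to a tool that chooses to interpret it.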
I really like starting documents where I write most text with Markdown, but write out any equations with LaTeX syntax. Pandoc works great for this.
Invariably, if I work on the document for any lengthy period of time, I end up giving up on using Markdown. I'll generate LaTeX output using Pandoc and just start editing it manually.
It depends how you work and how stable your product is. I think they are patch-heavy, i.e. the CVS tree is used to apply patches to, and that is that. CVS has some advantages then, as you can pin individual file versions on a tag rather than having to fuck around merging things. Individual files might be Ethernet drivers, the scheduler, etc., because TBH the BSD codebase is a hell of a lot better decoupled and modular than most things out there.
My wife's grandfather designed buildings like that in the 1960s for the Ministry of Defence in the UK. They had special wall and ceiling coverings, artificial floors and were lined with RF shielding even back then. They had an immense list of engineering checks that had to be done right down to light transmission.
One of the really cool things is that, even back then, they had developed a method of determining which keys were pressed on a typewriter by the sound (these were IBM ones, so this wasn't hard), and they had to design ways to dampen the sound enough not to reveal it outside the communications rooms. Even things like heating pipes were carefully dampened.
I have all of that and pay nothing. Otherwise it builds up very rapidly.
For example, the company I work for currently churns out a mere $16k a year for JIRA now that the pay-pay-pay has caught up...