Sadly, the output of TtH can't be pasted into Wikipedia pages directly because it has nested tables. The best of all worlds would allow <tex> ... </tex> tags indicating that the intervening text should be expanded by TtH before insertion. This would allow folks like me to type math the way Knuth intended.
I hate to spoil the party, but TtH does not produce valid HTML. It produces HTML that relies heavily on non-standard fonts used by Microsoft Windows, and that will not display correctly on other machines. Perhaps if they come up with an improved version that uses proper HTML entity references it would make more sense to use here. The author recognizes this, and is rather dismissive of the problem. I sympathize with his point of view that the standards are behind, and a working non-conformant product beats a theoretically correct but non-working one. But nonetheless, I think a forward-looking project like Wikipedia should encourage forward-looking approaches (i.e., obeying the clean standards that browsers haven't yet fully implemented) rather than kowtowing to pragmatic approaches that will break in the future. -- LDC
Hmm.... it isn't hard to get Netscape to display the correct characters on other operating systems. I agree that it would be nice to use Unicode-based standards, but a standards-based solution that just doesn't work isn't very helpful either. Dismissing a 90% solution because it is not elegant, and thus leaving yourself with NO solution, isn't very productive. It should be noted that the author of TtH has said that he would support better solutions once they exist.
Would you care to suggest an alternative? -- TedDunning
I really question that "90%" figure. Does the code produced really work on Macintosh and Unix machines? If it does, then I'd be happy with it in one form: if the internally stored, editable text of the page (i.e., what I'm editing now) were stored in the original information-preserving TeX format, and then converted to HTML by TtH at display time, then the non-standardness of the display form isn't so bad, and can be fixed later with new software when available. I just don't want the bad HTML to be stored here.
The other options are image-producing translators and Java applets. Those certainly have drawbacks (such as download time and resolution), but they are very standard and they do work well. -- LDC
My suggestion is to do just that (display-time rendering of TeX into TtH). That was why I was so exercised about your dismissal of TtH; without some sort of late rendering, we lose the original TeX and start getting all kinds of goo in pages. See my Statistics/LLR page for a great example of nascent goo. -- TedDunning
Yes, that's clearly the way to go, and my "dismissal" of TtH was most specifically directed at the idea of pasting the HTML from it into Wiki pages. I'm all for
<tex>...</tex>, with perhaps a simplified internal syntax, which will allow us to change rendering methods anyway. I'd personally prefer
<mathml>...</mathml>, but I know that's a losing battle for now. --LDC
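The display-time scheme discussed above could be sketched very simply: the stored, editable page keeps the original TeX inside <tex>...</tex> tags, and the wiki engine expands those spans only when serving the page. The sketch below is a hypothetical illustration, not anyone's actual implementation; the `render` function stands in for whatever converter is eventually chosen (TtH, a MathML translator, or an image generator).

```python
import re

# Matches a <tex>...</tex> span; DOTALL lets formulas span multiple lines.
TEX_TAG = re.compile(r"<tex>(.*?)</tex>", re.DOTALL)

def expand_tex_tags(page_text, render):
    """Replace each <tex>...</tex> span with the renderer's HTML output.

    The stored page keeps the original, information-preserving TeX;
    conversion to (possibly imperfect) HTML happens only at display
    time, so the rendering method can be swapped out later without
    touching the stored pages.
    """
    return TEX_TAG.sub(lambda m: render(m.group(1)), page_text)

# Example with a placeholder renderer standing in for TtH:
html = expand_tex_tags("Euler: <tex>e^{i\\pi} + 1 = 0</tex>.",
                       lambda src: "<i>" + src + "</i>")
```

Because the converter is injected as a function, moving from TtH to a MathML-emitting renderer later would be a one-line change on the display side.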
Ahh ... my 90% figure, btw, was based on the fact that almost all viewers would be able to read the results (Netscape 6 is the only broken case that I know about). As you say, the rendering can be improved as solutions are found, if we can only find a way to render TeX in the first place. -- TedDunning
That doesn't answer my question at all. Does this work on Macs and Unix boxes, or are you assuming (for some reason I'd also like the source of) that "90%" of readers have Windows?