I point this out as a reminder that the original neural net embeds these 43,331 documents in a 52-dimensional space; I have projected that down to 2 dimensions because, I don't know about you, but I find 52 dimensions somewhat challenging to visualize.

As you might already expect if you have done a little machine learning, that means we need to write cost functions that capture "how close is this image to the desired content?" and "how close is this image to the desired style?" And then there's a wrinkle that I haven't fully understood, which is that we don't (necessarily) evaluate these cost functions against the outputs of the neural net; we actually compare the activations of the neurons as they react to different images, and not necessarily from the final layer!
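The post doesn't say which dimensionality-reduction method produced the 2-D view (t-SNE and UMAP are also common choices for visualizing embeddings), so here is a minimal sketch using plain PCA via numpy, with a small random matrix standing in for the real 43,331 × 52 embedding matrix:

```python
import numpy as np

def project_to_2d(embeddings: np.ndarray) -> np.ndarray:
    """Project an (n_docs, n_dims) embedding matrix onto its top 2 principal components."""
    centered = embeddings - embeddings.mean(axis=0)
    # SVD of the centered matrix; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T  # shape (n_docs, 2)

# Toy stand-in for the real document-embedding matrix (hypothetical data).
rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 52))   # 100 "documents", 52 dimensions
coords = project_to_2d(docs)
print(coords.shape)  # (100, 2)
```

Each row of `coords` is then a point you can scatter-plot, which is the kind of 2-D picture being described here.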
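That wrinkle is, as I understand it, the classic neural style-transfer setup: the content cost compares activations at some chosen intermediate layer directly, while the style cost compares Gram matrices of activations (which channels co-activate). A minimal numpy sketch, with small random arrays standing in for real layer activations:

```python
import numpy as np

def content_loss(act_generated: np.ndarray, act_content: np.ndarray) -> float:
    # "How close is this image to the desired content?" — mean squared
    # difference between activations at a chosen (not necessarily final) layer.
    return float(np.mean((act_generated - act_content) ** 2))

def gram_matrix(act: np.ndarray) -> np.ndarray:
    # act: (channels, height*width). The Gram matrix records which feature
    # channels fire together, which is what "style" means in this setup.
    return act @ act.T / act.shape[1]

def style_loss(act_generated: np.ndarray, act_style: np.ndarray) -> float:
    return float(np.mean((gram_matrix(act_generated) - gram_matrix(act_style)) ** 2))

# Hypothetical activations: 8 channels over a flattened 4x4 feature map.
rng = np.random.default_rng(1)
a = rng.normal(size=(8, 16))
b = rng.normal(size=(8, 16))
print(content_loss(a, a))      # 0.0 — identical activations, zero content cost
print(style_loss(a, b) > 0)    # True — different images, nonzero style cost
```

In a real system the total cost is a weighted sum of these two terms, typically summed over several layers for the style part; the image is then optimized to reduce that sum.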