Chat:World/2021-07-12

anon325671: hello

devlogs: .

Magus: incoming next : CSB with tactical nuke

BIPIN_THE_KING: hello

Alex_Prog: Hi

AntiSquid: how do i remove the time limit locally from the SDK? forgot

AntiSquid: nvm found something

AntiSquid: nope that wasn't it, damn it ... :/

binjn: hi everyone

binjn: does anyone know a pythonista community

FriendlyKh: let me see

FriendlyKh: Discord is an option

binjn: oh thx

KatMan911: is it just me, or did CG just slow down HARD?

k-space: basically not working at all for me

69razer69: it's working fine for me

k-space: seems back now

Hunter64TheOne: fr

Hunter64TheOne: #fr

KatMan911: quick question - if I got out of the chat room for the current COC, how would I get back in to talk to other guys?

Wontonimo: know their user names

Wontonimo: and pm them

KatMan911: Ah, yes.

KatMan911: any way to get the COC room name, though? The one that you automatically get into when COC starts?

Wontonimo: if you followed them, then they will show up in your following list in your profile

Wontonimo: it has a name, and if you remember what the room name was you can use the # to get there: "#roomname"

Wontonimo: in the chat. Actually, you can use that to make arbitrary rooms like #littlemushroom

Wontonimo: or #fr

KatMan911: so everyone'd see what I'm saying, without any need for PMs

KatMan911: yeah, that's the thing - they're all "COC<insert huge number here>"

KatMan911: oki

Wontonimo: type /help here to see some other options

KatMan911: thanks

Wontonimo: :thumbsup:

Hunter64TheOne: #fr

Hunter64TheOne: are there any French speakers

Hunter64TheOne: ?

Hunter64TheOne: #fr

Hunter64TheOne: 000000000000000000 0000000 000 0000000000

Hunter64TheOne: why am I being kicked out?

Astrobytes: English in World please, talk French in #fr, don't spam 0's

Hunter64TheOne: okey-dokey

Hunter64TheOne: yes

Hunter64TheOne: yes

Hunter64TheOne: yes

Hunter64TheOne: yes

Astrobytes: Do you want a ban?

Hunter64TheOne: no

Hunter64TheOne: but why

StepBack13: lag in clash? or just me?

Astrobytes: Someone else complained on discord so I'm guessing it's running a little slow.

Wontonimo: when CoC is lagging, play a multi like https://www.codingame.com/multiplayer/bot-programming/tic-tac-toe/discuss

Wontonimo: when CoC isn't lagging, play a puzzle like https://www.codingame.com/training/easy

Astrobytes: Good shout :)

sBlip: hello world

KiwiTae: Astrobytes o/

Astrobytes: hey KiwiTae :)

AntiSquid: someone can talk in chuck norris encoding :O

N413: wow

N413: so cool

linjoehan: AntiSquid: yes

KiwiTae: AntiSquid yes in #chucknorris room

Westicles: ugh. CN is impossible if you suck at both bash and perl

Lucky30_: anyone know if it is possible to build a nodejs api using a script in package.json

Lucky30_: if you know about it, please message me in private so that i can see your reply even after i log out

nthd3gr33: i have been rank 1 in wood 1 league in coders strike back for a while now

nthd3gr33: any ideas on why i'm not getting promoted?

Counterbalance: battles: 96%, nthd3gr33

nthd3gr33: what does that mean?

Counterbalance: that it wasn't done yet.. but now it is

Counterbalance: so you should get a promotion soon

nthd3gr33: Oh, I just got promoted

nthd3gr33: thanks

Astrobytes: congratulations

nthd3gr33: collision time!

Wontonimo: congrats nthd3gr33

Wontonimo: hey nthd3gr33, anyone tell you about the -3v heuristic for Coders Strike Back?

Zenoscave: How long has dbd_r been 1st?

jacek: dbdr? dunno

Zenoscave: dbdr when did ya get the lead on euler?

Westicles: ah good, the gang's all here so we can talk about CN Bash

whatevericando4you: Failure Found: 2End of line (\n) Expected: Nothing

Zenoscave: CN Bash?

Westicles: chuck norris

Zenoscave: did they change validators?

Westicles: nah, just wondering if you really did bash or if it is perl or ruby wrapped

Zenoscave: perl

Zenoscave: well some bash

Westicles: yeah, that's what I thought. I'm trying to do it all in bash but this xxd stuff is too messy

Zenoscave: lol I have a full bash one let me see what its length is

jacek: the suspense is killing me

AntiSquid: he lost count :P

Westicles: the answer is you need to learn perl, and I have no interest in doing that

Wontonimo: i knew perl. i used to teach it.

Wontonimo: that was a couple decades ago

Wontonimo: so it is safe to say i no longer know perl

Astrobytes: perl5 is perl5

Zenoscave: 92

Astrobytes: (as much as I can tell)

Astrobytes: pewpew

Zenoscave: pewpewpew

Wontonimo: Programmers Extraction and Reporting Language ... or something like that

Astrobytes: Practical Extraction and Reporting Language

Zenoscave: Eight Megabytes And Constantly Swapping

Astrobytes: By Larry Wall

Wontonimo: yeah, that sounds right

Astrobytes: First scripting language I became proficient in, circa 2000-2001

Zenoscave: a.k.a. Escape-Meta-Alt-Control-Shift. the hot-key needed to quit after it thrashes

Astrobytes: lol

Wontonimo: first one i learnt around 97

Zenoscave: 2008 for me Started with VB6

Astrobytes: scripting though

Zenoscave: I think it was 6 might've been a different one

Zenoscave: Astro hows your chuck

Astrobytes: VB6 was like about a decade before?

Astrobytes: My what?

Zenoscave: Limited resources

Zenoscave: chuck norris golf

Astrobytes: Hmm, lemme look

Husoski: Actually, Perl is just Perl. The acronyms came later. Originally intended to be "Pearl", but there was another Pearl language.

Zenoscave: interesting

Astrobytes: best is 103 ruby, must've spent a lot of time on that one

Astrobytes: and isn't it just 'perl'

Husoski: Perl 6 is now called Raku. Wikipedia has a history section in the Perl article, and another on the name.

Astrobytes: Yeah I know about Raku

Zenoscave: 103!? Get working astro

Astrobytes: Golfing isn't my favourite thing, neither is ruby :P

Husoski: Just puttin' in my $0.02...what's that in Euros?

Zenoscave: I'm similar. golfing xor ruby is horrible.

Zenoscave: either golf in ruby or don't golf and don't use ruby

Astrobytes: No idea Husuki :D

Astrobytes: Husuki? Husoski

Zenoscave: Bueller? Bueller?

AntiSquid: it values less in Euros

AntiSquid: and probably would be rounded down to 0.01

Westicles: around 10 bob, fits right in your nose

Zenoscave: 10 bobs?

Zenoscave: Is this a currency?

Westicles: according to the beatles, yes

Zenoscave: I don't know how to process this. that's enough CG chat for a day

Astrobytes: 10 bob was a 10 shilling note? I think. back in the day in the UK

Zenoscave: Ok that helps I'll stay now

Zenoscave: got my bash down to 90

Astrobytes: bashing your bash

Zenoscave: mhm

Zenoscave: til it bashes no more

Westicles: you already have a 72 lol

Westicles: or 76 actually

Zenoscave: it's not really bash though lol
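
For reference, the encoding being golfed above can be sketched ungolfed in Python (7-bit ASCII, then each run of identical bits becomes two blocks of zeros); this is only an illustration of the logic, not anyone's golfed solution:

    def chuck_norris(message):
        """Ungolfed sketch of the Chuck Norris unary encoding: 7-bit ASCII,
        then every run of identical bits becomes two zero-blocks ('0' for a
        run of 1s, '00' for a run of 0s, then one '0' per bit in the run)."""
        bits = "".join(format(ord(c), "07b") for c in message)
        blocks = []
        i = 0
        while i < len(bits):
            j = i
            while j < len(bits) and bits[j] == bits[i]:
                j += 1
            blocks.append("0" if bits[i] == "1" else "00")
            blocks.append("0" * (j - i))
            i = j
        return " ".join(blocks)

    print(chuck_norris("C"))  # -> 0 0 00 0000 0 00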

Marchete: PoZero

Marchete: https://twitter.com/robinphysics

Marchete: https://twitter.com/robinphysics/status/1283475087740612608

Marchete: sorry, I meant that link

Marchete: I don't understand anything

Wontonimo: from the article that the link references:

Wontonimo: "With this insight, we propose a variant of AlphaZero which uses the exact solution to this policy optimization problem, and show experimentally that it reliably outperforms the original algorithm in multiple domains."

Astrobytes: interesting

Wontonimo: basically, they are saying that MCTS is an approximate policy and they are proposing the exact policy optimization

Marchete: and this is when I got lost

Marchete: I don't understand the notation

Marchete: like at all

Marchete: very little info about it, considering they claim much better performance than AlphaZero and MuZero

Marchete: https://slideslive.com/38928110/mcts-as-regularized-policy-optimization?ref=speaker-24115-latest

Marchete: pasted for later use

Marchete: I hope https://cg.spdns.eu/wiki/Main_Page is working :D

Marchete: https://arxiv.org/pdf/2007.12509.pdf

Wontonimo: I like the background ...

Wontonimo: "Background: Consider a standard RL setting tied to a Markov decision process (MDP) with state space X and action space A"

Wontonimo: oh yes, i can picture it lol

Marchete: in the video they give some samples about alphazero, and nothing about the new algo...

Marchete: I can't understand those hyper mathematical papers, a pity
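
For what it's worth, a rough Python sketch of the "exact solution" the quote above refers to, as I read the linked paper (arXiv:2007.12509): the search policy maximizes q·pi minus lam·KL(prior, pi) over the simplex, and the KKT conditions give a closed form up to one scalar, found here by bisection. The lam argument stands in for the paper's visit-count-dependent lambda_N; the names and toy numbers are illustrative, not from the paper's code.

    import numpy as np

    def exact_improved_policy(q, prior, lam, iters=60):
        """Sketch: solve pi = argmax over the simplex of  q.pi - lam * KL(prior, pi).
        KKT gives pi[a] = lam * prior[a] / (alpha - q[a]) with alpha chosen so the
        entries sum to 1; alpha is bracketed and found by bisection."""
        q = np.asarray(q, dtype=float)
        prior = np.asarray(prior, dtype=float)
        lo = np.max(q + lam * prior)   # alpha cannot be smaller (each pi[a] <= 1)
        hi = np.max(q) + lam           # at this alpha the sum is already <= 1
        for _ in range(iters):
            alpha = 0.5 * (lo + hi)
            if np.sum(lam * prior / (alpha - q)) > 1.0:
                lo = alpha
            else:
                hi = alpha
        alpha = 0.5 * (lo + hi)
        return lam * prior / (alpha - q)

    # Toy check: with a flat prior and small lam, the result concentrates on the best q.
    print(exact_improved_policy(q=[0.1, 0.5, 0.2], prior=[1/3, 1/3, 1/3], lam=0.1))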

Marchete: btw wontonimo

Marchete: you know a lot about NN

Marchete: what's better/suitable on small NN? Batch normalization or L2 regularization?

Marchete: ....or both?

Marchete: or none

AntiSquid: 10+ epochs of testing :P

Wontonimo: i really like batch norm so long as you have a large batch.

Marchete: define large

Wontonimo: 64 would be the lower limit i'd think.

Wontonimo: but depending on the implementation, it should blend from batch to batch

Wontonimo: tensorflow batch norm?

Wontonimo: using keras?

Marchete: keras

Wontonimo: yeah, so I don't think your batch size matters

Wontonimo: sorry for the red herring

Marchete: I've tried x=tf.keras.layers.BatchNormalization(epsilon=0.00001,scale=True)(x)

Marchete: nothing different

Wontonimo: yeah, batch norm is just really nice at regularization. try adding it liberally, like between every 3 layers

Marchete: maybe batch was small

Marchete: every 3?

Wontonimo: or 2

Marchete: I added in all :D

Wontonimo: haha ... sure

Marchete: no I mean, where should I?

Marchete: right now I was testing input->Dense1->Dense2-><split>->Policy & value

Wontonimo: "Batch normalization applies a transformation that maintains the mean output close to 0 and the output standard deviation close to 1." and you can specify the axis to use

Marchete: I added it on Dense1 and Dense2

Wontonimo: which means, it will output values close to 0 with a STD ~ 1

Marchete: but my weights went a bit higher

Wontonimo: and if the receiving network requires values in the range of -0.01 to +0.01 or -10.0 to +10.0, it will struggle

Marchete: maybe I need a bigger batch size

Marchete: I thought it was based on samples, and it's on batch size

Marchete: 128/256?

Wontonimo: wait ... we got 2 threads of conversation going. i'm going back to this

Wontonimo: input->Dense1->Dense2-><split>->Policy &value

Wontonimo: with where to put regularization

Marchete: yes

Marchete: I->D1->Batch->Relu->D2->B->Relu.....

Wontonimo: input->batchNorm->D1->D2->[P,V]

Marchete: hmm

Marchete: before?

Marchete: inputs are one hot

Wontonimo: that's pretty classic, it's like white balancing a picture before processing

Wontonimo: oh, 1 hot!

Wontonimo: nvm

Wontonimo: thanks for clarifying

Wontonimo: if input is 1 hot then dense1 is effectively an embedding lookup

Wontonimo: input drives the embeddings that will be retrieved from D1

Wontonimo: you don't need to batchnorm that

Marchete: ahh

Marchete: good

Wontonimo: let's say you did ... what it would do would be to make all the embeddings have the same distribution

Wontonimo: 0 +- 1 for all values

Marchete: I have no idea, I just saw "batchnorm is good for learning speed"

Wontonimo: in the D1 (embedding) layer

Marchete: the same for L2 regularization

Marchete: but I have no idea when to use them

Wontonimo: input is just 1 1-hot, or a couple 1-hots ?

Marchete: 14x one hots

Wontonimo: how big is the 1hot

Marchete: 14x28 more or less

Marchete: <400

Wontonimo: 14 1hots, each 1hot having 28 values. okay.

Wontonimo: then flatten all those values to go to D1

Marchete: they are already flat

Wontonimo: D1 is then a linear addition of 14 embeddings with an activation function after the addition.

Marchete: yes, and I optimized it that way

Marchete: bias + 14*weights from each activation

Marchete: so it's faster

Wontonimo: thanks for sharing.
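
A tiny numpy check of that point, with made-up sizes (14 one-hots of 28 values, D1 width 64): a Dense layer applied to the concatenated one-hots gives the same result as summing the 14 selected weight rows plus the bias.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(14 * 28, 64))       # D1 weights (sizes are illustrative)
    b = rng.normal(size=64)                  # D1 bias
    hot = rng.integers(0, 28, size=14)       # index of the '1' inside each one-hot

    # Full dense product on the sparse input...
    x = np.zeros(14 * 28)
    x[np.arange(14) * 28 + hot] = 1.0
    full = x @ W + b

    # ...equals just adding up the 14 selected rows of W plus the bias.
    fast = W[np.arange(14) * 28 + hot].sum(axis=0) + b
    assert np.allclose(full, fast)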

Wontonimo: hmm

Wontonimo: what would batch norm between I and D1 do ... let me think

Marchete: I see it after D1, because it's where you need more normalization

Wontonimo: effectively it would transform your input into more like a probability output, because one of the

Wontonimo: outputs that is very infrequently 1 will be +3 when it fires, and -0.1 when not firing

Wontonimo: whereas an output that is frequent may only reach +0.8 when firing and -0.4 when not firing

Wontonimo: and so placing batchnorm over I will convert it to a signal amplifier of how frequently that option is seen

Wontonimo: just an idea. i'm not suggesting using it , but that is my intuition

Wontonimo: adding it to after D1 will help D2 learn faster

Marchete: ok

atomprods: hello

Wontonimo: the rationale is that having batchnorm after a layer makes that layer always conform to act within a range, and later layers don't have to readjust to a changing range
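
A minimal Keras sketch of that placement, with made-up layer widths and head sizes; the only point is batchnorm after D1 and D2 rather than on the one-hot input:

    import tensorflow as tf
    from tensorflow.keras import layers

    inp = layers.Input(shape=(14 * 28,))   # 14 flattened one-hots of 28 (illustrative)
    x = layers.Dense(64)(inp)              # D1: effectively summed embeddings, no BN before it
    x = layers.BatchNormalization()(x)     # keep D1's output in a fixed range for D2
    x = layers.ReLU()(x)
    x = layers.Dense(64)(x)                # D2
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    policy = layers.Dense(28, activation="softmax", name="policy")(x)
    value = layers.Dense(1, activation="tanh", name="value")(x)
    model = tf.keras.Model(inp, [policy, value])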

Marchete: in fact after batchnorm I had a clipped relu

Wontonimo: and since D1 is taking in more than just 1 1hot, it is more than just an embedding lookup and can benefit from batchnorm

Marchete: 0..1 range only

Marchete: maybe it's not a good fit for batchnorm

Wontonimo: inside your network?

Marchete: I need to make more tests

Wontonimo: i can see that for final output possibly

Marchete: clipped relu is for some quantization methods

atomprods: I said hello.

Wontonimo: hello

Wontonimo: all values (and connections) outside the clip are effectively treated as if block_backprop was applied to them

Wontonimo: and so there is no learning outside the range of 0-1

Wontonimo: i've seen ReLU being used by concatenating it with the negative of the previous layer pushed through ReLU also. It seems like

Wontonimo: you'd have the same info, but it is nicely divided between positive values and negative values

Marchete: https://github.com/glinscott/nnue-pytorch/blob/master/docs/nnue.md#clippedrelu

Wontonimo: and learning can happen in all ranges

Wontonimo: ah

Wontonimo: i don't see a need for batchnorm with something like Clipped ReLU.

Wontonimo: i really don't know how Clipped ReLU would perform !!
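
In Keras terms, both variants mentioned here can be sketched roughly like this (shapes are illustrative; the concat-with-the-negated-input idea is usually called CReLU):

    import tensorflow as tf
    from tensorflow.keras import layers

    pre = layers.Input(shape=(64,))        # some pre-activation layer output

    # Clipped ReLU: output bounded to [0, 1], as in NNUE-style quantization.
    clipped = layers.ReLU(max_value=1.0)(pre)

    # CReLU: keep both the positive and the negated-then-rectified signal,
    # so learning also happens for negative pre-activations (doubles the width).
    crelu = layers.Lambda(lambda t: tf.concat([tf.nn.relu(t), tf.nn.relu(-t)], axis=-1))(pre)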

Marchete: no

Marchete: I mean

Marchete: I was testing different things

Marchete: I thought regularization or batchnorm could help me to get weights in range

Marchete: like in -2.0 to 2.0 always

Marchete: also because it improves learning and avoids overfitting

Marchete: I need to recheck something else

Marchete: now I'm seeing P1 always losing

Marchete: when on previous tests it seemed to have some advantage

Marchete: thanks for your help wontonimo

Wontonimo: hey ...

Wontonimo: to be clear, my usual method to these things is to talk about the network like I did

Wontonimo: then come up with some theories, and then test them

Wontonimo: and then a lot of times I'm wrong

Wontonimo: which then allows me to learn lol

Wontonimo: if you have the CPU, try ReLU then leakyReLU,

Wontonimo: clipped ReLU

Wontonimo: and honestly, i don't think i helped yet ;)

Wontonimo: thanks for chatting

doomento: I'm clueless

Astrobytes: I'm Astrobytes.

doomento: I don't know what to do on "The Descent". . . or really anything.

Astrobytes: May I recommend starting here: https://www.codingame.com/playgrounds/55547/how-to-get-started-on-codingame

Astrobytes: Should clarify a few things

doomento: Ok I will start reading it now. :nerd: thanks

BOTAlex: I unexpectedly created an AI that ranked up from wood 2 to gold 3 (Coders Strike Back)

FunkyFrenchman_e730: Anyone got like some youtube videos

Wontonimo: cool BOTAlex

Wontonimo: what was the algo?

Wontonimo: oh, it looks like you are using the -3v algo

BOTAlex: i have no idea which one it is or what it's named. I just made the thing stop drifting that much.
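
For context, the "-3v" trick mentioned earlier is usually described as aiming at the checkpoint shifted back by about three times the pod's current velocity so it stops overshooting; a minimal sketch, with illustrative names and the velocity estimated from the previous position:

    def drift_compensated_target(x, y, prev_x, prev_y, cp_x, cp_y, k=3):
        """Aim at the checkpoint minus k times the current velocity ('-3v')."""
        vx, vy = x - prev_x, y - prev_y   # velocity estimated from the last frame
        return cp_x - k * vx, cp_y - k * vy

    # Example: pod at (1000, 1000) moving (+200, +50), checkpoint at (5000, 3000).
    print(drift_compensated_target(1000, 1000, 800, 950, 5000, 3000))  # (4400, 2850)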