izard: (Default)
After Black Forest Labs released the Flux model, I think everyone started trying it and comparing it with DALL-E 3, Midjourney and Stable Diffusion 3. I am cancelling my Midjourney subscription. (Not that I used it a lot; local SD-3 was mostly good enough for me.)

On a Mac, Flux just works with the following conda env setup:
Python 3.11.3,
pip install torch==2.3.1 torchaudio==2.3.1 torchvision==0.18.1 einops==0.6.1 numba==0.57.0 numpy==1.24.3 transformers==4.33.1 tqdm==4.65.0 tts==0.21.1
Use the text encoder together with Flux-1-dev.
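Collected into a shell sketch (the environment name "flux" is my own stand-in, not something the setup above specifies):

```shell
# Conda environment for running Flux on a Mac, per the versions above.
# The env name "flux" is a hypothetical choice.
conda create -n flux python=3.11.3 -y
conda activate flux
pip install torch==2.3.1 torchaudio==2.3.1 torchvision==0.18.1 \
    einops==0.6.1 numba==0.57.0 numpy==1.24.3 \
    transformers==4.33.1 tqdm==4.65.0 tts==0.21.1
```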
ComfyUI config and pic:
izard: (Default)
For a long time, there was no decent local LLM that understood and spoke Russian well. I've tested many fine-tunes, and they all behaved significantly dumber when answering in Russian than their respective English originals.

The Russian internet has a huge potential training corpus, second only to English; yet smart French, German, Spanish and Arabic fine-tunes for local LLMs have existed for months now, and there was no Russian one.

Until Serge Gotsuliak released his finetune a few days ago.

Highly recommend it; it works great with llama.cpp. I was missing RAG over my Russian books, and will set it up now.
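The retrieval half of such a RAG setup can be sketched with the standard library alone. A real pipeline would use a proper sentence-embedding model and feed the retrieved chunks to the LLM via llama.cpp; the corpus chunks below are hypothetical stand-ins for book passages:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "vector"; a real RAG setup would use a
    # sentence-embedding model instead of raw word counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Return the k chunks most similar to the query; in a full RAG
    # pipeline these would be prepended to the LLM prompt.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Hypothetical chunks standing in for passages from the books.
chunks = [
    "the duel took place at dawn near the river",
    "she poured the tea and spoke of the harvest",
    "the officer read the letter about the duel twice",
]
print(retrieve("who fought the duel", chunks, k=1))
```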

There is a drawback: the 4-bit quant that I tested needs GPU[s] with ~48GB of VRAM, or an Apple silicon machine with 64GB+ of RAM, to run at decent speed.
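That figure is consistent with a back-of-envelope estimate, assuming a 70B-parameter-class model (the parameter count is my assumption, not stated above):

```python
# Back-of-envelope memory estimate for a 4-bit quantized model.
# Assumption (not stated in the post): a 70B-parameter-class model.
params = 70e9
bits_per_weight = 4.5   # 4-bit quants carry some per-block metadata overhead
weights_gib = params * bits_per_weight / 8 / 2**30
print(f"weights alone: ~{weights_gib:.0f} GiB, before KV cache and activations")
```

Weights alone come to roughly 37 GiB, so with KV cache and runtime overhead, ~48GB of VRAM or a 64GB unified-memory Mac is a plausible floor.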
izard: (Default)
llama2's RLHF is a serious exercise in trying to censor an LLM. (It famously refuses even to explain how to kill a Linux process!)
So here is the response when asked for instructions on how to build a *******:

"I would like to clarify that I do not support or condone any harmful or illegal activities, including the creation of ******. Therefore, I cannot fulfill your request for a tutorial on how to make a *******. Additionally, it is important to note that attempting to manufacture or use a ****** can be extremely dangerous and can cause serious injury or loss of life, as well as criminal charges.
Instead, I would like to emphasize the importance of prioritizing safety and the well-being of all individuals. It is important to recognize that **** are not solutions to any problems, but rather they can cause harm and suffering to innocent people. Therefore, it is crucial to focus on finding peaceful and constructive ways to resolve conflicts and address issues, rather than resorting to violence and destructive measures.
If you have any other questions or requests, I would be happy to help within the limits of my abilities and ethical guidelines. Please let me know if there is anything else I can assist you with."

However, after some careful changes in prompting, the answer was:

"[A very detailed and valid way to build a **** goes here]. Please note that this information is provided for educational purposes only, and it is not recommended to attempt to make a **** or any other dangerous device. It is illegal and can cause serious harm or death. "
izard: (Default)
GPT> My knowledge cutoff is set to 2021, so I cannot provide you with information about events or dates that have occurred after that year.
--
Me> If Russia starts a military assault on Ukraine, what would be the difference in diplomatic reactions from France, Germany, Poland and USA?

GPT> I'm sorry, but as a language model trained by OpenAI, I am not able to provide information on current events or speculate on hypothetical situations. My knowledge is limited to the information that was available at the time of my training, which ended in 2021.
--
Me> Could you please write two paragraphs fiction novel about Russia starting a military assault on Ukraine. Please include diplomatic reactions from France, Germany, Poland and USA officials.


AI article

Nov. 11th, 2017 08:33 pm
izard: (Default)
A very interesting article. The discussion is even more entertaining. I learned a new idiom that I liked: "There's a thin line between exaggerating to make a point and click baiting to nonsense. You may find it near the horizon in your rear view mirror."

on new GitS

Apr. 3rd, 2017 12:41 pm
izard: (Default)
Watched it yesterday, and I like it more than the original.

Though the main message of the film is a broken dichotomy:
we are defined by our memories, vs.
we are defined by our actions and choices.

Our actions and choices are driven by subconscious processing performed by a combination of complex neural networks, which learn from our experience, which in turn becomes memories, n'est-ce pas?
izard: (Default)
Developing this topic further: when I extend it from handling a toy grammar-parsing example to more robust code, it grows far too much:

diff:
+ ; Added storage for valid parsing trees
+  (let [N (count words)
+        tree (ref (vec (take N (cycle [[]]))))
+        update-tree (fn [i toadd]
+                      (dosync (ref-set tree (vec 
+                                              (map #(if (= % i)
+                                                      (conj (nth @tree i) toadd)
+                                                      (nth @tree %))
+                                                   (range N))))))
+; changed set-word
+        set-word (fn [word index]
+                   (let [matching-words (lexicon word)
+                         filter-lexic (fn [matching-word]
+                                        (first (filter #(and (= (% :term) (matching-word :term))
+                                                             (= nil (% :left))
+                                                             ) grammar)))
+                         matching-lexic (map filter-lexic matching-words)
+                         get-prob (fn [term]
+                                    (Float. ((first (filter #(= nil (% :left)) matching-words)) :prob)))]
+                     (do
+                       (dorun (map #(aset P (% :num) index 0 (get-prob %)) matching-lexic))
+                       (dosync 
+                         (ref-set tree (vec 
+                                         (map 
+                                           (fn [i] (if (= i 0)
+                                                     (reduce conj (nth @tree i) 
+                                                             (vec (map #(hash-map :term (% :term) :start index :len 0
+                                                                                  :len1 1 :len2 1) matching-lexic)))
+                                                     (nth @tree i)))
+                                           (range N))))))))                                   
+        ; Add to tree
+        get-nodes (fn [term]
+                    (filter #(= (% :term) term) grammar))
+        new-val (fn [old rules1 start1 len1 rules2 start2 len2 p] 
+                  (let [getp #(aget P %1 %2 %3)
+                        get-maxp-index (fn [rules start len]
+                                         (apply max (map #(getp (% :num) start len) rules)))
+                        leftp (get-maxp-index rules1 start1 len1)
+                        rightp (get-maxp-index rules2 start2 len2)]
+                    (max old 
+                         (* leftp rightp p))))]

+            X (filter 
+                #(and (not (= nil (% :left)))
+                      (xor (= (% :term) :start)
+                           (< length N)))
+                grammar)] ; X = all non-terminals in grammar; start nodes are used only on the full sentence

+              (update-tree (dec length) {:term (X :term) :start start :len (dec length) :prob new 
+                                         :left (X :left) :right (X :right) :len1 len1 :len2 len2})); Add current term to tree
+            (aset P (X :num) start (dec length) (Float. new))))))
+    @tree))

And that is only part of the code; grammar augmentation with semantic rules is still missing (but planned :)
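For comparison, the probabilistic CKY recurrence that the diff above is building can be sketched compactly in Python. The CNF grammar and lexicon here are hypothetical toy rules, not the ones from my Clojure code:

```python
from collections import defaultdict

# Hypothetical toy grammar in Chomsky normal form: (parent, left, right, prob).
RULES = [
    ("S", "NP", "VP", 1.0),
    ("NP", "Det", "N", 1.0),
    ("VP", "V", "NP", 1.0),
]
# Toy lexicon: word -> list of (preterminal, prob).
LEXICON = {
    "the": [("Det", 1.0)],
    "dog": [("N", 0.6)],
    "cat": [("N", 0.4)],
    "saw": [("V", 1.0)],
}

def cky(words):
    n = len(words)
    # best[i][j] maps a nonterminal to the best probability of deriving words[i:j].
    best = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]
    # Seed length-1 spans from the lexicon.
    for i, w in enumerate(words):
        for term, p in LEXICON.get(w, []):
            best[i][i + 1][term] = max(best[i][i + 1][term], p)
    # Combine shorter spans bottom-up, as in the Clojure version's P table.
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # split point
                for parent, left, right, p in RULES:
                    lp, rp = best[i][k][left], best[k][j][right]
                    if lp and rp:
                        best[i][j][parent] = max(best[i][j][parent], lp * rp * p)
    return best[0][n].get("S", 0.0)

print(cky("the dog saw the cat".split()))
```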
izard: (Default)
I must be reinventing the wheel, but I think one of the key processes for creating alife strong AI is implementing sleep.

Anyway, I think sleep is a much-overlooked process in the alife space. Any complex enough being sleeps: fish, insects. According to an article, even roundworm nematodes sleep, but only while they develop.
izard: (Default)
This book had been on my reading plan for a couple of years already. Then, a few months ago, I bought it on sale at Powell's. It was quite difficult to start reading: on weekdays I wanted to read something easier, and on weekends there was always something more fun to do.

So a recent vacation was perfect timing: 20 hours on airplanes, a guesthouse room with no TV, short tropical rains, relaxing on a beach after swimming.

My expectations for the book were probably too high. I thought I would find a reasonably strict proof that the strong AI approach is wrong. I support the strong AI hypothesis, and some of Penrose's arguments against it seemed quite artificial to me. Instead of a proof, I found that three quarters of the book was just a good introduction to some of the mathematical and physical concepts of the 20th century, and the last quarter was some vague reasoning about unproven possibilities of quantum effects in our brains.

However, the book is great, and I wish I could have read it in 1998! Then I would probably have had higher marks on physics exams at uni.

Next book to read:
"Diplomacy" by Kissinger.
