Saturday, October 1, 2016

Progress and Vikas and Intellect Process in terms of current language and terminology


INPUT  --->  PROCESSING --->  OUTPUT 
 

1. [Time/Space/Matter/Energy/UnrecordedData] ==> (Events/sense impression/Action) ==> [SingleEventMemoryImpression=Data]
2. [Collection of Different Data] ==> (Diversity) ==> [DataRecord/Words and Meaning of Words of Natural Language] 
3. [DataRecord] ==> (Structure=Rules/Organization/Processes/Symbols/Tokens/ObjectiveProcessing/Structure of Sentence of Natural Language) ==> [Information] 
4. [Information] ==> (Relation And Connection between sets of information and storage/Natural Language Model) ==> [Information Relational Database and Communication Networks]
5. [Information Relational Database] ==> (Pattern-Recognition/Feedback/Learning) ==> [Intelligence]
6. [Intelligence] ==> (Purpose/Meaning/Tradition/Values/Principles/Morals = SubjectiveProcessing) ==> [Personal-Knowledge/Personal-Understanding]
7. [Context driven Knowledge] ==> (Outer-worldly Experience and Universal Perspective and wide and long-term view) ==> [Wisdom]
8. [Context driven Wisdom] ==> (Personal-Inner-subjective Experience) ==> [Intuition/Inspiration/Grace]
9. [Context driven Grace] ==> (Experience of Unity Consciousness) ==> [Return to Source]
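
As a rough illustration of the lower, objective layers of this pipeline (layers 1 through 4), here is a minimal Python sketch; the stage names and toy event data are hypothetical, chosen only to show how each layer transforms the output of the previous one.

```python
# Hypothetical sketch: the objective layers of the pipeline as composable steps.
# raw sense impressions -> data -> data record -> information -> relational store
from collections import Counter

def sense(raw_events):
    """Layer 1: record single-event impressions as data points."""
    return [e.strip().lower() for e in raw_events if e.strip()]

def collect(data_points):
    """Layer 2: diversity - gather distinct data into a record (word counts)."""
    return Counter(data_points)

def structure(record):
    """Layer 3: impose structure/rules to turn the record into information."""
    return sorted(record.items(), key=lambda kv: kv[1], reverse=True)

def relate(information):
    """Layer 4: connect pieces of information into a small relational store."""
    return {word: {"count": count, "rank": rank}
            for rank, (word, count) in enumerate(information, start=1)}

events = ["sun ", "rain", "sun", "wind", "rain", "sun"]
print(relate(structure(collect(sense(events)))))
```

The subjective layers (5 onward) are deliberately left out of the sketch, in line with the note below that they resist piecewise logical analysis.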



PROCESSING  ELEMENTS EQUIVALENCE 
 
a. Recording/Reacting Manas/Mind and Senses = Objective Processing of (Events/Sense Impression/Action)  
b. Neural Synapses and Networks = Objective Connection and Networks and Memory
c. Buddhi/Reasoning LEFT MIND = Objective Processing of (Structure/Logic)
d. Buddhi/Reasoning RIGHT MIND = Objective Processing of (Pattern-Recognition/Feedback/Learning)
e. Ahankaar/Emotional/Ego Mind = Subjective Processing of (Meaning)
f. Universal Consciousness = Subjective Processing of (Outer-worldly Experience)
g. Cosmic Consciousness = Subjective Processing of (Inner-subjective Experience)
h. Holy Spirit = (Unity Consciousness) 

The Higher Subjective Realms of Awareness, starting from Deep Learning, cannot be analyzed logically in parts and have to be treated in a holistic way.

**************************************** 
Application to Robotics 
using Mathematics
****************************************
 
1. Senses/Action [Domain Knowledge] - Data Signal Processing for Data from/to Sensors/Actuators/Motors
[ Mechanics and Robotic Control, Instrumentation, Digital Signal Processing, Digital Image Processing, Sensor Signal Processing, Computer Vision ]
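
As one concrete instance of the Layer 1 signal processing listed above (a generic sketch, not tied to any particular robot or sensor), a moving-average filter smoothing noisy simulated sensor readings:

```python
# Minimal sketch: smoothing a noisy sensor signal with a moving-average filter.
import random

def moving_average(signal, window=5):
    """Return the windowed moving average of a 1-D signal."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i + 1 - lo))
    return out

# Simulated distance-sensor readings: a ramp plus Gaussian measurement noise.
true_signal = [0.1 * t for t in range(50)]
noisy = [x + random.gauss(0, 0.5) for x in true_signal]
smoothed = moving_average(noisy, window=5)
print(smoothed[:5])
```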

2. Diversity - Signal Quantization, Statistics, Data Redundancy Processing using Information Theory, Data Preprocessing, Feature Engineering, Discrete Mathematics

Statistics - Central Limit Theorem:

Application to evolution theory: the means of samples drawn from diverse groups form an approximately normal sampling distribution, so diversity across groups also produces a normal distribution over the entire population (a small simulation sketch follows below).

At the language level, this layer corresponds to the Word Dictionary and the Meaning of Words.
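
The simulation below is a minimal sketch of the Central Limit Theorem claim: even when individual samples are drawn from very different groups (a flat, a skewed, and a normal distribution here, all made up for illustration), the distribution of sample means is approximately normal.

```python
# Minimal sketch of the Central Limit Theorem: sample means drawn from several
# very different "groups" (distributions) still pile up in a bell shape.
import random
import statistics

groups = [
    lambda: random.uniform(0, 10),        # flat distribution
    lambda: random.expovariate(0.5),      # skewed distribution
    lambda: random.gauss(5, 3),           # normal distribution
]

sample_means = []
for _ in range(10_000):
    draw = [random.choice(groups)() for _ in range(30)]   # mixed-group sample
    sample_means.append(statistics.mean(draw))

print("mean of sample means:", round(statistics.mean(sample_means), 2))
print("std dev of sample means:", round(statistics.stdev(sample_means), 2))
```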

3. Structure/Processes - Math/Logical Structures and Math/Logical Processing - Computer Data Structures and Algorithms, Numerical Methods for Computational Science, Finite Automata, Matrix Algebra (Matrices and Tensors), Calculus, Monte Carlo Method, Markov Chains, Computer Operating Systems, Language Sentence Structure.
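
To ground one of the Layer 3 elements named above, here is a minimal sketch of the Monte Carlo method, estimating π by random sampling (the sample count is arbitrary). Markov chains, finite automata, and the other structures listed follow the same pattern: a precise rule applied repeatedly to structured data.

```python
# Minimal sketch: Monte Carlo estimation of pi by sampling random points
# in the unit square and counting how many fall inside the quarter circle.
import random

def estimate_pi(samples=1_000_000):
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

print(estimate_pi())   # approaches 3.14159... as the sample count grows
```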

4. Connection/Network - Set Theory/Graph Theory/Network Theory/Language Structure and Language Model.
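
A minimal sketch of Layer 4 (connection/network), assuming nothing beyond the standard library: an information network stored as an adjacency list, with breadth-first search tracing how two concepts are linked. The node names are hypothetical.

```python
# Minimal sketch: an information network as an adjacency list, with
# breadth-first search tracing the connection between two concepts.
from collections import deque

graph = {
    "data": ["record", "signal"],
    "record": ["information"],
    "signal": ["information"],
    "information": ["knowledge"],
    "knowledge": ["wisdom"],
    "wisdom": [],
}

def shortest_path(start, goal):
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path("data", "wisdom"))
# ['data', 'record', 'information', 'knowledge', 'wisdom']
```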

5. Intelligence - Machine Learning: Deep Learning, Feedback Control Theory, Pattern-Recognition, Neural-Networks, Statistical Learning, Supervised Learning.
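
A minimal sketch of Layer 5 (supervised learning with feedback): a single perceptron learns the logical AND pattern by correcting its weights after each mistaken prediction. This illustrates the principle of learning from feedback, not a production learning system.

```python
# Minimal sketch of supervised learning with feedback: a single perceptron
# learns the logical AND pattern by correcting its weights after each error.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.1

for epoch in range(20):
    for (x1, x2), target in samples:
        prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - prediction          # feedback signal
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

print(w, b)
for (x1, x2), target in samples:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```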

6. Meaning - Seemingly Value-based and Subjective Decision Making - Fuzzy Logic, Speech Recognition, Natural Language Processing (NLTK, Transformers), Generative AI, GPT-n, ChatGPT, OpenAI models, Artificial-Intelligence Large Language Models.

(Language is the connection between the objective and the subjective. For example, Large Language Models (LLMs) connect subjective human intelligence with objective computer processing. The larger an LLM's internal memory, the closer it comes to human intelligence: reported parameter counts grow from roughly 175 billion for GPT-3 and GPT-3.5 to an estimated 1.76 trillion for GPT-4, with still larger but undisclosed figures speculated for later models such as GPT-5. A second factor is the ATTENTION SPAN, the number of tokens the model can scan at once: the larger the attention span, the closer the LLM comes to human intelligence. A third factor is the training data, that is, how many books, websites, literature, video, audio, and other multimedia sources are used to train the model. A minimal attention sketch follows below.)
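
To make the "attention span" point concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation by which a transformer-based LLM weighs every token in its context window against every other token. The toy sizes (4 tokens, 8 dimensions) are made up for illustration; real models use context windows of thousands to millions of tokens. NumPy is assumed to be installed.

```python
# Minimal sketch of scaled dot-product attention over a toy context window.
# Each row of Q, K, V stands for one token; real LLMs use far more tokens
# and dimensions, but the arithmetic is the same.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # how strongly each token attends to the others
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens = 4          # the "attention span" of this toy model
dim = 8
Q = rng.normal(size=(tokens, dim))
K = rng.normal(size=(tokens, dim))
V = rng.normal(size=(tokens, dim))

output, weights = attention(Q, K, V)
print(weights.round(2))   # each row sums to 1: one token's attention over the window
```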

Example1:

********************************************  
Reading Comprehension VIEW-S-T-A-MP
********************************************
The acronym uses letters from each of the
following five critical elements: 
VIEW-S-T-A-MP

Level 1. READ the WORDS of PASSAGE
Level 2. [VIEW] - Diversity - VIEW = the different VIEWpoints in the passage
Level 3. [S] - Structure = the Structure of the passage
Level 4. [T] - Connections between Words and Meaning
    T = the Tone of the passage
    Types of tones - empathetic, passionate, boring.
Level 5. [A]- Learning from the passage
    A = the Arguments in the passage
Level 6. [MP] - Meaning conveyed from the passage
    MP = the Main Point

Killoran, David M.; Stein, Steven G. The PowerScore LSAT Reading Comprehension Bible, 2017 Edition (The PowerScore LSAT Bible Series).

Example2: 
***************************************
Data Science Essentials in Python - Dmitry Zinoviev
****************************************
Collect → Organize → Explore → Predict → Value
Level 1/2. Collect = Collection, Diversity
Level 3/4. Organize = Structure, Organization, 
Processes, Connections, Networks
Level 5. Explore = Pattern Recognition
Level 5. Predict = Learning and Intelligence
Level 6. Value = Meaning
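
A hypothetical end-to-end sketch of the Collect → Organize → Explore → Predict → Value pipeline on a made-up dataset (requires Python 3.10+ for statistics.correlation and statistics.linear_regression); the temperatures and sales figures are invented for illustration.

```python
# Hypothetical sketch of the Collect -> Organize -> Explore -> Predict -> Value
# pipeline on a tiny made-up dataset of daily temperatures and ice-cream sales.
import statistics

# Collect (Levels 1/2): raw, diverse observations.
raw = [("mon", 20, 120), ("tue", 25, 150), ("wed", 30, 200),
       ("thu", 22, 130), ("fri", 28, 180)]

# Organize (Levels 3/4): structure the records.
records = [{"day": d, "temp_c": t, "sales": s} for d, t, s in raw]

# Explore (Level 5): look for a pattern.
temps = [r["temp_c"] for r in records]
sales = [r["sales"] for r in records]
corr = statistics.correlation(temps, sales)

# Predict (Level 5): a simple least-squares line, sales ~ slope*temp + intercept.
slope, intercept = statistics.linear_regression(temps, sales)

# Value (Level 6): a decision a person can act on.
forecast = slope * 32 + intercept
print(f"correlation={corr:.2f}, expected sales at 32°C ≈ {forecast:.0f}")
```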

Example3:   
*************************************** 
Study Using Mind Maps - Tony Buzan (Inventor) / Toni Krasnic
**************************************** 
1. Preview - Exposure of Sense Perception for data collection on the Theory under Study
2. Participate - Diverse Views and Associations from the data of the Theory under Study
3. Process - Structure / Function / Processes / Connections / Networks from the Views and Associations of the Theory under Study
4. Practice - Learning and Feedback from the Structure and Processes of the Theory under Study
   - Retain in Memory - Purpose, Meaning and Value associated with Learning of the Theory
5. Produce - Experience and Applications from the Purpose, Meaning and Value of the Theory under Study (can be Universal, Cosmic, Unity) - Create Mind Maps

Example4:   
*************************************** 
Question for ChatGPT-4: What are the key skills required for data analysis?
*************************************** 

Level 1. Domain Knowledge: Understanding the industry you are working in can greatly enhance the quality and relevancy of your data analysis.

Level 2. Data Cleaning: This involves cleaning, structuring, and enriching raw data into a desirable format for better decision-making in less time.

Level 2. Statistical Knowledge: Understanding of statistical concepts and methods is critical to interpreting data and drawing conclusions.

Level 3. Programming Skills: Knowledge of programming languages like Python and R is highly beneficial as many data analysis tasks are carried out programmatically.

Level 4. Database Management: Familiarity with SQL or other database query languages, as well as an understanding of database design, is important for working with large amounts of data.

Level 4. Communication Skills: Good communication skills are necessary to effectively explain complex data findings to non-technical stakeholders.

Level 5. Machine Learning: Although not always required, having an understanding of machine learning techniques can be a valuable tool in a data analyst's arsenal, especially for making predictions or identifying patterns.

Level 7. Critical Thinking: The ability to ask the right questions, identify key problems, and make informed decisions based on data is crucial.

Level 8. Data Visualization: The ability to present data in a visual format that is easy to understand can be crucial in communicating your findings. This might involve using tools like Tableau, Power BI, or matplotlib in Python.

Level 9. Attention to Detail: Being detail-oriented is important in data analysis as accuracy is critical when it comes to data, calculations, and analysis results.


Food for Thought 1: 
It is important to note that ChatGPT-4 did not describe the most important level of attention: Level 6, the Moral and Ethical aspects of Data Analysis.
  

Food for Thought 2:

Also, the fact that ChatGPT-4 is able to describe complex learning mechanisms and levels of attention suggests that it is not just a "Stochastic Parrot", a characterization debated by Geoffrey Hinton and Yann LeCun, but that it does, as Geoffrey Hinton argues, in some sense "understand" the real world.

-------------------------------------------------------------

ChatGPT-4 : What are moral and ethical aspects of Data Science?


1. Privacy: This is probably the most prominent issue. When dealing with personal data, it's crucial to respect the privacy of individuals. Data should be anonymized and personally identifiable information (PII) should be securely handled to prevent any misuse.
2. Consent: Related to privacy is the matter of consent. Before collecting or using data, especially sensitive or personal data, it's generally seen as necessary to obtain informed consent from the individual involved. This is a basic requirement of respect for individual autonomy.
3. Transparency: Data science can sometimes be a "black box," where it's not clear how data are being used or how decisions are being made. This lack of transparency can be a problem, especially when data science is used in decision-making that affects people's lives.
4. Bias and Fairness: Algorithms can inadvertently perpetuate and amplify human biases, particularly if those biases are present in the data used to train them. It's crucial to be aware of these biases and try to mitigate them, and to test algorithms for fairness in their outcomes.
5. Accountability: When things go wrong—as when an algorithm makes a decision that harms someone—it's crucial to have clear lines of accountability. Who is responsible for the mistake? How can they be held accountable? These questions are not always easy to answer in the context of data science.
6. Data Quality and Integrity: Garbage in, garbage out is a principle in computing that applies particularly well to data science. If the data used are of poor quality or are erroneous, then the results will be too. It's an ethical requirement to strive for data quality and integrity, so as not to mislead or cause harm with faulty results.
7. Data Access and Inequality: There can be inequality in terms of who has access to data, who can use it, and who benefits from it. This can contribute to social and economic inequalities. Moreover, large tech companies and governments often have access to far more data than individuals or smaller entities, raising issues about power imbalances.

8. Responsible Communication of Results: Data science findings can be misinterpreted or misrepresented, either intentionally or unintentionally. It is essential to communicate findings in a responsible manner, avoiding over-generalization, exaggeration, or causing unnecessary alarm or harm
*******************************************
Example 5:  “The Data Science Hierarchy of Needs”
Excerpt From: Joe Reis and Matt Housley, "Fundamentals of Data Engineering" (June 22, 2022)

[Figure: The Data Science Hierarchy of Needs pyramid]

Level 1/2/3. Foundations - Data Collection, Data Diversity and Inclusion, Data Structure and Processes
Level 4. Connections - Move and Store - Networks, Graphs, and Relational Database Storage
Level 5. Learning - Pattern Recognition - Explore and Transform
Level 5. Learning - Optimization
Level 5. Intelligence - Deep Learning and AI

***********************
Example 6 -  Data Foundations to Wisdom Peak

Source: https://www.ontotext.com/knowledgehub/fundamentals/dikw-pyramid/ 


Level 1/2. Data Foundations - Data Records and Variety.
Level 3/4. Information - Data Structure, Processes, Connections, Communication.
Level 5.  Knowledge (and Intelligence) - Creative Learning, Pattern Recognition, Intelligence.
Level 6. Wisdom Peak - Subjective Value-based Decision with Wider and Longterm view.

***********************************

Applications of AI:

Artificial intelligence reduces a 100,000-equation quantum physics problem to only four equations
by Thomas Sumner, Simons Foundation
source: https://phys.org/news/2022-09-artificial-intelligence-equation-quantum-physics.html


More information:
Domenico Di Sante et al,
Deep Learning the Functional Renormalization Group,
Physical Review Letters (2022).
DOI: 10.1103/PhysRevLett.129.136402




Figure: A visualization of a mathematical apparatus used to capture the physics and behavior of electrons moving on a lattice. Each pixel represents a single interaction between two electrons. Until now, accurately capturing the system required around 100,000 equations, one for each pixel. Using machine learning, scientists reduced the problem to just four equations, so a similar visualization of the compressed version would need just four pixels.

Credit: Domenico Di Sante/Flatiron Institute
------------------------------------------------------------------------------------------

******************************************** 
Humanity and Technology at Singularity
********************************************
Before the intelligence of technology can take over humanity at the singularity, raise the consciousness of humanity to levels beyond the capability of technology, to what technology cannot understand and humans can never lose: pure consciousness, pure attention, pure care.

IF BY CHOICE, Artificial Intelligence (optimization and decision making) in machines is to be beneficial for, rather than adversarial towards, social networks, social businesses, and humans, then the values that humans prize (gained through the progress of human evolution) should be programmed into the AI training data (the selection of training data and features, and their influence on decision making) and into the algorithms (the different perspectives on each of the results of the different optimizations).
- TED talk


Geoffrey Hinton (the "Godfather of AI") on the dangers of AI -

Among his concerns: 

  • Generated media could erode the average person’s ability to gauge reality.
  • AI models could cause massive unemployment by automating rote work, and perhaps not-so-rote work.
  • Automated code generators eventually could write programs that put humans at risk.
  • Hinton supports global regulation of AI but worries that it would be ineffective. Scientists probably can devise more effective safeguards than regulators, he said.
AI and Humanity nearing the Singularity
(Approaching the singularity: how is ChatGPT aware enough not to take the blame upon itself for actions based on its model, when that model is trained on ill-intentioned or evil-biased data? Food for thought: would this not suggest that Awareness and Consciousness have levels of attention, in living beings and in machines alike?)


नेहाभिक्रमनाशोऽस्ति प्रत्यवायो न विद्यते |
स्वल्पमप्यस्य धर्मस्य त्रायते महतो भयात् || 2.40||

Translation
B.G 2.40: Working in this state of consciousness (States of Evolution),
there is no loss or adverse result,
and even a little effort saves one from great danger.

- Bhagawad Geeta
*********************************************

Nvidia CEO predicts the death of coding - Jensen Huang says AI (and the developers of AI) will do the work, so kids don't need to learn coding (Layers 2 through 5, where coding can be automated using Generative AI coding-assist tools).


This implies that the future of employment lies in:
Layer 1. Expertise in Domain Knowledge,
Layer 1. Field Experience in the Application Domain,
Layer 1. Application of AI to Medicine, Pharmacology, Weather and Global Warming Prediction, Movie and Sound Creation, Metallurgy, Robotics, and so on,
Layer 1. Prompt Engineering when in-depth domain knowledge is available,
Layer 4. Communication Skills,
Layer 6. Moral and Ethical Aspects of Data Science and AI,
Layer 7. Critical Thinking, creative ideas, new ways of education using critical thinking, new applications of AI,
Layer 7. Prompt Engineering when in-depth domain knowledge is not available,
Layer 8. Panoramic and Long-term view of AI applications and system integration,
Layer 9. Attention to Detail in all aspects of Data Science and AI.

***************************************

AI Models Are Converging Towards the Same Representation of the World.


https://arxiv.org/pdf/2405.07987

Machines and Living Beings

Machines are assembled or designed by living beings;
Living beings are the creators of their own selves.
Machines have levels of awareness, but those levels are discrete;
Living beings have awareness whose levels are continuous.
Machines cannot have subjective experiences;
Living beings can have subjective experiences.
Machines can never achieve enlightenment;
Only humans, at the height of consciousness, can achieve enlightenment.

**************************************
Improved Response of new AI model - ChatGPT-4o (omni)

(illustrating that newer AI models are approaching the same representation of the real world)

ChatGPT-4o : What are key skills required for Data Science?

Data science is a multidisciplinary field that requires a combination of technical, analytical, and soft skills. Here are some key skills that are essential for a successful career in data science:

### Technical Skills:
1. **Programming Languages:**
- Proficiency in programming languages such as Python and R is crucial. These languages are commonly used for data manipulation, analysis, and building machine learning models.

2. **Statistical Analysis:**
- A solid understanding of statistics, including hypothesis testing, probability distributions, and regression analysis, is essential for interpreting data and making informed decisions.

3. **Data Manipulation and Cleaning:**
- Skills in data wrangling, cleaning, and preprocessing using tools like Pandas (Python) or dplyr (R) are important for preparing data for analysis.

4. **Machine Learning:**
- Knowledge of machine learning algorithms and techniques, such as classification, regression, clustering, and deep learning, is necessary for predictive modeling.

5. **Data Visualization:**
- Proficiency in data visualization tools and libraries, such as Matplotlib, Seaborn, Tableau, or Power BI, is important for presenting insights in a clear and understandable manner.

6. **Database Management:**
- Experience with SQL and NoSQL databases, as well as data extraction and querying, is important for managing and accessing large datasets.

7. **Big Data Technologies:**
- Familiarity with big data technologies like Hadoop, Spark, and distributed computing frameworks can be beneficial for working with large-scale data.

### Analytical Skills:
1. **Critical Thinking:**
- The ability to approach problems logically and analytically, identifying patterns and drawing meaningful conclusions from data.

2. **Problem-Solving:**
- Strong problem-solving skills to develop innovative solutions and strategies based on data insights.

3. **Domain Knowledge:**
- Understanding the specific industry or domain in which you are working can help in contextualizing data insights and making relevant recommendations.

### Soft Skills:
1. **Communication:**
- Effective communication skills to explain complex data insights to non-technical stakeholders and collaborate with cross-functional teams.

2. **Curiosity and Continuous Learning:**
- A natural curiosity and willingness to stay updated with the latest trends, tools, and techniques in data science.

3. **Teamwork:**
- The ability to work well in teams, as data science projects often involve collaboration with other data scientists, engineers, and business professionals.

4. **Attention to Detail:**
- Precision and attention to detail are important for ensuring data accuracy and integrity.

### Project Management Skills:
1. **Time Management:**
- The ability to manage time effectively, prioritize tasks, and meet deadlines.

2. **Project Planning:**
- Skills in planning and executing data science projects, including defining objectives, milestones, and deliverables.

Developing a combination of these skills can help you become a well-rounded data scientist capable of tackling complex data challenges and making impactful decisions.

**************************************
ChatGPT-4o : What are the real world applications of large language models (LLMs)

Large Language Models (LLMs) like GPT-3, GPT-4, and similar models have a wide array of applications across various industries. Here are some key real-world applications:

### 1. **Customer Support and Chatbots:**
- **Automated Customer Service:** LLMs can power chatbots that handle customer inquiries, provide support, and resolve issues efficiently.
- **Live Chat Assistance:** They can assist human agents by suggesting responses and providing relevant information during live chat sessions.

### 2. **Content Creation:**
- **Writing and Editing:** LLMs can generate articles, blog posts, reports, and other types of content. They can also assist with editing and proofreading.
- **Creative Writing:** These models can help in writing stories, poems, and scripts, providing inspiration or even drafting entire pieces.

### 3. **Education and E-Learning:**
- **Tutoring:** LLMs can act as virtual tutors, helping students understand complex concepts, answering questions, and providing explanations.
- **Content Generation:** They can create educational materials, quizzes, and summaries of academic content.

### 4. **Healthcare:**
- **Medical Documentation:** Assisting in drafting medical reports, patient summaries, and other documentation.
- **Patient Interaction:** Providing preliminary consultations, answering patient queries, and delivering health information.

### 5. **Business and Finance:**
- **Market Analysis and Reports:** Generating financial reports, market analysis, and business insights based on data.
- **Customer Interaction:** Assisting in customer relationship management by generating personalized responses and communications.

### 6. **Software Development:**
- **Code Generation and Assistance:** Helping developers by generating code snippets, providing documentation, and debugging code.
- **Documentation:** Creating and maintaining technical documentation and user manuals.

### 7. **Translation and Localization:**
- **Language Translation:** Providing real-time translation services for text and speech.
- **Localization:** Adapting content to different cultures and languages, ensuring it resonates with local audiences.

### 8. **Research and Development:**
- **Literature Review:** Summarizing and synthesizing research papers and articles.
- **Hypothesis Generation:** Assisting in formulating research hypotheses and experimental designs.

### 9. **Entertainment:**
- **Gaming:** Creating dialogues, narratives, and interactive stories for video games.
- **Interactive Experiences:** Developing virtual assistants and characters for immersive experiences in VR and AR environments.

### 10. **Legal and Compliance:**
- **Document Review:** Analyzing legal documents, contracts, and compliance reports.
- **Legal Research:** Assisting in legal research by summarizing case laws and legal precedents.

### 11. **Personal Productivity:**
- **Virtual Assistants:** Enhancing virtual personal assistants (e.g., Siri, Alexa) to provide more natural and context-aware interactions.
- **Task Management:** Assisting with scheduling, reminders, and task organization.

### 12. **Marketing and Sales:**
- **Copywriting:** Creating persuasive marketing copy, product descriptions, and ad content.
- **Customer Insights:** Analyzing customer feedback and sentiment to inform marketing strategies.

### 13. **Social Media Management:**
- **Content Scheduling and Creation:** Generating posts, captions, and managing social media calendars.
- **Engagement:** Interacting with followers and responding to comments and messages.

### 14. **Data Analysis:**
- **Data Summarization:** Summarizing large datasets and generating reports.
- **Natural Language Querying:** Allowing users to query databases using natural language.

### 15. **Accessibility:**
- **Assistive Technologies:** Enhancing tools for individuals with disabilities, such as voice-to-text applications and screen readers.

The versatility and adaptability of LLMs make them a powerful tool in transforming various aspects of both personal and professional life. However, it’s essential to consider ethical implications and ensure responsible use of these models to avoid misuse and mitigate potential biases.

***************************************************



