DARL – AI and fuzzy logic rule language examples – scripting Bots

DARL is a fuzzy logic expert system language that can be composed from fragments selected from a database. We show you how with examples

Introduction

DARL is a language for creating fuzzy logic rules that can describe knowledge of some kind, such as legal knowledge, business processes, etc. It is an extremely simple language, written so that non-programmers can easily understand and create rule sets embodying their knowledge.

This follows on from my previous article about DARL.

Background

DARL is composable

Unlike many other languages, DARL imposes no required order on the elements within its fundamental unit of code, the ruleset. All the rules and definitions within a ruleset can occur in any order; the sequence of operation is determined by the inference engine, which is effectively the compiler for DARL. This makes it particularly easy to compose DARL from multiple sources.

During runtime evaluation, the rules relating to each output are collected together and a dependency map is generated. Rules for the same output are fuzzy-ORed together when evaluated, giving defined and predictable results when rules are aggregated from multiple sources.

If a common data dictionary exists, it is possible to treat DARL as a series of fragments that are assembled at a particular time for a particular purpose, for example by using some kind of database structure to hold DARL fragments and assembling them for an inference task.
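To make this concrete, here is a rough sketch (a purely illustrative ruleset, reusing the Bot["time"] example that appears later in this article). The two rules could be written in either order; the inference engine derives the evaluation sequence from the dependency map, so time is evaluated before response:

ruleset orderExample
{
 store Bot;

 output textual response;
 output textual time;

 /* this rule depends on time, which is produced by the rule written after it */
 if anything then response will be document("%% time %% UTC", {time});
 if anything then time will be Bot["time"];
}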

Example of composability with Bots

DARL also forms part of the DARL Bot Service (DBS), a bot generation system provided in SaaS form. The intelligence in DBS is provided by the DARL language and inference engine, and DARL is used in two different ways. First, an NLP front end recognises word or concept sequences that form part of the Bot Model; each of these has a DARL fragment associated with it, which is processed to determine how to respond to that particular phrase. These fragments may in turn call full-blown DARL rulesets and hand over the conversation to them, which is the second way that DARL is used within the DBS.

Stores

One problem with a ruleset-based approach to AI, and it's a common problem throughout AI, is that the type and number of inputs and outputs, and their possible states, must be known in advance before rulesets are created, neural networks are trained, and so on. This means that accessing dynamic data sources, where names and data types are not known in advance, is problematic.

Stores were added to DARL to get around this problem. Stores are read/write interfaces that can call some external process asynchronously, such as a remote REST interface or, in the case of Bots, the local storage associated with a conversation.

Store definition:

store <store name>

Store usage:

<store name>[<list of parameters>]
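For example, a minimal illustrative ruleset that declares the UserData store described below and reads the "age" field used in a later example might look like this:

ruleset storeExample
{
 /* declaration: store <store name> */
 store UserData;

 output textual response;

 /* usage: <store name>[<list of parameters>] */
 if anything then response will be UserData["age"];
}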

Currently these stores are defined within the DBS:

Name | Usage | Read/Write
UserData | Bot user data collected by the Bot Framework | Read/Write
ConversationData | Bot conversation data collected by the Bot Framework | Read/Write
PrivateConversationData | Bot private conversation data collected by the Bot Framework | Read/Write
Bot | Constants, like the name of the bot or your website, set through bot model editing | Read only
Value | Values collected through the current text sequence | Read only
Call | Interface to call a ruleset | Write only
Word | Gets a word definition from WordNet | Read only
Rest | Calls a remote REST interface if secured or the current user has access | Read/Write
Collateral | Gets a predefined piece of Markdown and returns it | Read only

Stores and fragment examples

In order to create an inference from a DARL fragment we need to insert it into a ruleset skeleton. In the DBS this is generated dynamically from the set of installed stores, but the result is like this:

ruleset botRuleset
{
 store UserData;
 store ConversationData;
 store PrivateConversationData;
 store Bot;
 store Value;
 store Call;
 store Word;

 output textual response;
 output textual link;

 /*%% rule_insertion_point %%*/
}

Note that the free DARL interface has a linter that checks DARL code, and it specifically handles DARL skeletons, so you can grammar-check DARL fragments by providing the source to be checked, the skeleton code, and the insertion locator (in the above case “/*%% rule_insertion_point %%*/”). See https://Darl.ai/swagger.

Fragments and text sequences

The way that the DBS processes incoming text is beyond the scope of this article, but the general process is as follows.

The textual processing handles both matches and partial matches and builds a stack of default responses to partial matches as the tree of text/concept elements is traversed.

When the text is exhausted, the best complete match's DARL fragment is selected or, if there is no complete match, the top default fragment on the stack.

To execute the fragment, it is inserted into a dynamically generated skeleton and the result is applied to the DARL inference engine.
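For illustration, assembling the skeleton shown above with a single-rule fragment, such as the random greeting shown in the next section, would produce a complete ruleset along these lines:

ruleset botRuleset
{
 store UserData;
 store ConversationData;
 store PrivateConversationData;
 store Bot;
 store Value;
 store Call;
 store Word;

 output textual response;
 output textual link;

 /* fragment inserted at the rule_insertion_point */
 if anything then response will be randomtext("hello, can I help?", "hi, what can I do for you?");
}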

Examples of Fragments

All these fragments use the skeleton shown above. The DBS automates simple processes like creating a response or calling a ruleset, but we will show the DARL fragments in full.

Simple text response

if anything then response will be "Dr Andy is a character, like Colonel Sanders, Aunt Betty or Ronald McDonald. There is a human Dr Andy, but the character is a composite of him and his technology";

Randomly selected responses

if anything then response will be randomtext("hello, can I help?", "hi, what can I do for you?");

Response inserting values into text

output textual time;
if anything then time will be Bot["time"];
if anything then response will be document("%% time %% UTC",{time});

This declares a temporary output, time, reads the time from the Bot store's time field, and inserts it into the wrapper text.

Capturing values

if anything then UserData["age"] will be Value["value:number"];
if anything then response will be "Thanks, I'm much younger";
if anything then ConversationData["topic"] will be "age";

This is in response to “I am <value:number>”; so, for instance, with "I am 54" the value 54 is extracted into the Value store.

Note that a topic value is stored to potentially disambiguate future responses.
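As a sketch of how that stored topic could be read back later (this is not one of the DBS examples, but it uses only the presence check shown further below), a subsequent fragment might look like:

if ConversationData["topic"] is present then response will be ConversationData["topic"];
otherwise if anything then response will be "We haven't settled on a topic yet.";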

Calling a ruleset

 if anything then Call[""] will be "call_andy.rule";

This passes control of the conversation to the “call andy” ruleset, which takes a message and emails it on.

Calling other stores

output textual val;
if anything then val will be Value["value:"];
if anything then response will be Word[val];

This calls the Word store, which performs a WordNet lookup of the meaning of the words after “what is a”. For instance, “what is a palindrome” gets the reply “a word or phrase that reads the same backward as forward”.

Returning data conditioned on stores

if UserData["age"] is present then response will be UserData["age"];
otherwise if anything then response will be
  randomtext("I'd like to know how old you are.","You haven't told me your age.","I don't know.");
if anything then ConversationData["topic"] will be "age";

In this case, triggered by "how old am I", we check the UserData store for age data and respond with it if it's present. This illustrates how you can check for the presence of any input, output or store. There is a matching keyword, absent, for the opposite case.
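For instance, assuming absent mirrors the syntax of present, the same check could be written the other way around (a sketch, not taken from the DBS):

if UserData["age"] is absent then response will be
  randomtext("I'd like to know how old you are.","You haven't told me your age.","I don't know.");
otherwise if anything then response will be UserData["age"];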

Summary

DARL has many unique characteristics. The one demonstrated here is the ability to compose DARL rulesets from multiple sources. Although I've shown examples from the DBS, the facility is supported by DARL itself rather than the DBS, so you can make use of this capability through the free DARL API.

History

Initial version: 4/25/2018

Added a further example: 4/30/2018

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
