Department of Electrical & Computer Engineering
University of California, Davis

EEC 170 – Computer Architecture
Winter Quarter 2020
Laboratory Exercise 2: RISC-V Computer Arithmetic
Due Date: Feb 19, 2020
Full Points: 150

Objectives
The objectives of this exercise are fourfold.
• You will learn RISC-V assembly programming, especially the use of shift and logical operations such as and and or.
• You will learn how to emulate floating-point operations in software when the hardware does not support them.
• You will learn about the bfloat16 floating-point format, which is increasingly used in machine learning applications where a wide dynamic range is needed but lower precision is acceptable.
• Finally, this exercise will prepare you for the midterm exam: a successful implementation of Lab 2 demonstrates mastery of Chapters 2 and 3.
Background
The bfloat16 floating-point format is a computer number format occupying 16 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point. This format is a truncated (16-bit) version of the 32-bit IEEE 754 single-precision floating-point format, intended to accelerate machine learning and near-sensor computing.[1] It preserves the approximate dynamic range of 32-bit floating-point numbers by retaining 8 exponent bits, but supports only 8 bits of precision rather than the 24-bit significand of the binary32 format. The bfloat16 format is used in Intel AI processors such as the Nervana NNP-L1000, in Xeon processors and Intel FPGAs,[2][3][4] in Google Cloud TPUs,[5][6][7] and in TensorFlow.[7][8]
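As a concrete illustration of the field layout (and of the shift and logical operations this lab exercises), here is a minimal sketch of unpacking a bfloat16 value into its three fields. It assumes the 16-bit value has been loaded into register t1, as in the skeleton below; the destination registers t3, t4, and t5 are arbitrary choices.

srli t3, t1, 15    # t3 = sign bit
srli t4, t1, 7     # move the exponent field to the low bits
andi t4, t4, 0xFF  # t4 = 8-bit biased exponent (bias 127)
andi t5, t1, 0x7F  # t5 = 7-bit stored mantissa (hidden leading 1 not included)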
Description of the Project
Your processor does not have floating-point hardware, but your application requires floating-point arithmetic. Your task is to implement an efficient function that realizes floating-point addition and subtraction. You will assume the bfloat16 representation. This representation is identical to the IEEE 754 single-precision standard discussed in class and in the book, except that the mantissa is only 7 bits, so the entire representation fits in 16 bits.
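For example, 0.75 = 1.1 × 2^-1 in binary, so its bfloat16 encoding has sign 0, biased exponent 127 - 1 = 126 = 01111110, and mantissa 1000000; the resulting 16 bits, 0 01111110 1000000, equal 0x3F40 = 16192, which is the value given for x in the skeleton below.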
What You Need to Do
• Write RISC-V assembly code and verify using the Venus RISC-V simulator.
• You are given a skeleton file that contains declarations for the inputs x and y, the output z, and the operation to be performed: if operation = 0 then z = x + y, otherwise z = x − y (a sketch of handling this flag follows this list).
• z should be a valid bfloat16 representation of the answer, which means it should be normalized. Inputs x and y are also normalized bfloat16 representations.
• Input and output are done in decimal integer representation. The bfloat16 representations of x and y are supplied to your program as decimal integers, and your output should also be a decimal integer corresponding to the bfloat16 representation of z.
• You may ignore rounding, overflow, and underflow, but make sure to handle test cases where one of the operands is zero.
• The TA will provide additional test cases.
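Below is a minimal sketch of handling the operation flag, assuming the register assignments made by the skeleton (t0 holds operation, t2 holds y); the temporary t3 and the label do_add are arbitrary names chosen here. Flipping the sign bit of y turns subtraction into addition of the negated operand, so the rest of the program only needs to implement addition.

beq  t0, x0, do_add   # operation == 0: add x and y directly
li   t3, 0x8000       # mask for the bfloat16 sign bit (bit 15)
xor  t2, t2, t3       # flip y's sign, since x - y = x + (-y)
do_add:
# ... continue with bfloat16 addition of t1 and t2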
Useful Resources on the Web
https://en.wikipedia.org/wiki/Bfloat16_floating-point_format
https://oletus.github.io/float16-simulator.js/
https://www.rapidtables.com/convert/number/binary-to-decimal.html
https://www.binaryhexconverter.com/decimal-to-binary-converter
Skeleton File and Sample Results
.data
x: .word 16192 # 0.75= 16192
y: .word 16544 # 5 = 16544 
z: .word 0 # expected result 0.75-5= -4.25 which is 49288 if operation is 1
operation: .word 1 #0 is add, 1 is subtract

.text
la t0, operation
lw t0, 0(t0) # t0 contains the flag whether to add or subtract
la t1, x
la t2, y
lw t1, 0(t1) # t1 has x in bfloat16 representation
lw t2, 0(t2) # t2 has y in bfloat16 representation

#your program starts here



#store the final result in z



#print the result z as a decimal integer and exit
print:


#exit
addi a0, x0, 10
ecall
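
For reference, here is a minimal sketch of the print step left empty above, assuming the Venus ecall conventions already used by the skeleton's exit sequence (the syscall number goes in a0; syscall 1 prints the integer in a1 in decimal, and syscall 10 exits). The temporary t3 is an arbitrary choice.

la   t3, z        # address of the stored result z
lw   a1, 0(t3)    # a1 = bfloat16 result as an integer
addi a0, x0, 1    # Venus syscall 1: print a1 as a decimal integer
ecall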